SlowGuess committed on
Commit e813bef · verified · 1 Parent(s): 0e70972

Add Batch 2767a401-5a87-4c08-aba4-c298d2897c97

Files changed (50)
  1. 3dmultibodiesfittingsetsofplausible3dhumanmodelstoambiguousimagedata/06b0c760-24d9-4350-b0e0-0b96277eea5f_content_list.json +3 -0
  2. 3dmultibodiesfittingsetsofplausible3dhumanmodelstoambiguousimagedata/06b0c760-24d9-4350-b0e0-0b96277eea5f_model.json +3 -0
  3. 3dmultibodiesfittingsetsofplausible3dhumanmodelstoambiguousimagedata/06b0c760-24d9-4350-b0e0-0b96277eea5f_origin.pdf +3 -0
  4. 3dmultibodiesfittingsetsofplausible3dhumanmodelstoambiguousimagedata/full.md +314 -0
  5. 3dmultibodiesfittingsetsofplausible3dhumanmodelstoambiguousimagedata/images.zip +3 -0
  6. 3dmultibodiesfittingsetsofplausible3dhumanmodelstoambiguousimagedata/layout.json +3 -0
  7. 3dselfsupervisedmethodsformedicalimaging/16d69717-ef6c-44ce-8db2-1247c38adfe6_content_list.json +3 -0
  8. 3dselfsupervisedmethodsformedicalimaging/16d69717-ef6c-44ce-8db2-1247c38adfe6_model.json +3 -0
  9. 3dselfsupervisedmethodsformedicalimaging/16d69717-ef6c-44ce-8db2-1247c38adfe6_origin.pdf +3 -0
  10. 3dselfsupervisedmethodsformedicalimaging/full.md +246 -0
  11. 3dselfsupervisedmethodsformedicalimaging/images.zip +3 -0
  12. 3dselfsupervisedmethodsformedicalimaging/layout.json +3 -0
  13. 3dshapereconstructionfromvisionandtouch/1f94e634-0571-43de-9d4f-fd6aede79b46_content_list.json +3 -0
  14. 3dshapereconstructionfromvisionandtouch/1f94e634-0571-43de-9d4f-fd6aede79b46_model.json +3 -0
  15. 3dshapereconstructionfromvisionandtouch/1f94e634-0571-43de-9d4f-fd6aede79b46_origin.pdf +3 -0
  16. 3dshapereconstructionfromvisionandtouch/full.md +265 -0
  17. 3dshapereconstructionfromvisionandtouch/images.zip +3 -0
  18. 3dshapereconstructionfromvisionandtouch/layout.json +3 -0
  19. abanditlearningalgorithmandapplicationstoauctiondesign/63269d4e-ee93-47ec-8af5-376729555fe9_content_list.json +3 -0
  20. abanditlearningalgorithmandapplicationstoauctiondesign/63269d4e-ee93-47ec-8af5-376729555fe9_model.json +3 -0
  21. abanditlearningalgorithmandapplicationstoauctiondesign/63269d4e-ee93-47ec-8af5-376729555fe9_origin.pdf +3 -0
  22. abanditlearningalgorithmandapplicationstoauctiondesign/full.md +257 -0
  23. abanditlearningalgorithmandapplicationstoauctiondesign/images.zip +3 -0
  24. abanditlearningalgorithmandapplicationstoauctiondesign/layout.json +3 -0
  25. abayesiannonparametricsviewintodeeprepresentations/d8a34f6c-1cc1-4bcb-87ee-43a49156d29d_content_list.json +3 -0
  26. abayesiannonparametricsviewintodeeprepresentations/d8a34f6c-1cc1-4bcb-87ee-43a49156d29d_model.json +3 -0
  27. abayesiannonparametricsviewintodeeprepresentations/d8a34f6c-1cc1-4bcb-87ee-43a49156d29d_origin.pdf +3 -0
  28. abayesiannonparametricsviewintodeeprepresentations/full.md +284 -0
  29. abayesiannonparametricsviewintodeeprepresentations/images.zip +3 -0
  30. abayesiannonparametricsviewintodeeprepresentations/layout.json +3 -0
  31. abayesianperspectiveontrainingspeedandmodelselection/84e1e3b5-a1be-4ac2-99de-64a3ebfb8185_content_list.json +3 -0
  32. abayesianperspectiveontrainingspeedandmodelselection/84e1e3b5-a1be-4ac2-99de-64a3ebfb8185_model.json +3 -0
  33. abayesianperspectiveontrainingspeedandmodelselection/84e1e3b5-a1be-4ac2-99de-64a3ebfb8185_origin.pdf +3 -0
  34. abayesianperspectiveontrainingspeedandmodelselection/full.md +320 -0
  35. abayesianperspectiveontrainingspeedandmodelselection/images.zip +3 -0
  36. abayesianperspectiveontrainingspeedandmodelselection/layout.json +3 -0
  37. abenchmarkforsystematicgeneralizationingroundedlanguageunderstanding/66ecf06f-3cd5-4c86-b564-a5e5f9bf067a_content_list.json +3 -0
  38. abenchmarkforsystematicgeneralizationingroundedlanguageunderstanding/66ecf06f-3cd5-4c86-b564-a5e5f9bf067a_model.json +3 -0
  39. abenchmarkforsystematicgeneralizationingroundedlanguageunderstanding/66ecf06f-3cd5-4c86-b564-a5e5f9bf067a_origin.pdf +3 -0
  40. abenchmarkforsystematicgeneralizationingroundedlanguageunderstanding/full.md +233 -0
  41. abenchmarkforsystematicgeneralizationingroundedlanguageunderstanding/images.zip +3 -0
  42. abenchmarkforsystematicgeneralizationingroundedlanguageunderstanding/layout.json +3 -0
  43. abiologicallyplausibleneuralnetworkforslowfeatureanalysis/5481c27e-1703-4944-80a4-c7b08ceeb4a0_content_list.json +3 -0
  44. abiologicallyplausibleneuralnetworkforslowfeatureanalysis/5481c27e-1703-4944-80a4-c7b08ceeb4a0_model.json +3 -0
  45. abiologicallyplausibleneuralnetworkforslowfeatureanalysis/5481c27e-1703-4944-80a4-c7b08ceeb4a0_origin.pdf +3 -0
  46. abiologicallyplausibleneuralnetworkforslowfeatureanalysis/full.md +378 -0
  47. abiologicallyplausibleneuralnetworkforslowfeatureanalysis/images.zip +3 -0
  48. abiologicallyplausibleneuralnetworkforslowfeatureanalysis/layout.json +3 -0
  49. abooleantaskalgebraforreinforcementlearning/2c300620-8875-40a9-bdce-6433206b3d30_content_list.json +3 -0
  50. abooleantaskalgebraforreinforcementlearning/2c300620-8875-40a9-bdce-6433206b3d30_model.json +3 -0
3dmultibodiesfittingsetsofplausible3dhumanmodelstoambiguousimagedata/06b0c760-24d9-4350-b0e0-0b96277eea5f_content_list.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b8165a7b88a784c53d250cbc3dd0f50155081879287629422855f5e4901ed14a
size 72108
3dmultibodiesfittingsetsofplausible3dhumanmodelstoambiguousimagedata/06b0c760-24d9-4350-b0e0-0b96277eea5f_model.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cfdab5c0b4d59a3d89c0c8594051d7742ee2e36c650ca1305163674a099dfffb
size 90098
3dmultibodiesfittingsetsofplausible3dhumanmodelstoambiguousimagedata/06b0c760-24d9-4350-b0e0-0b96277eea5f_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6fac55b0b974c58f23c341635e35f6ce469f3f010613ab969e7185be7469b492
size 23131133
3dmultibodiesfittingsetsofplausible3dhumanmodelstoambiguousimagedata/full.md ADDED
@@ -0,0 +1,314 @@
# 3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data

Benjamin Biggs* (Department of Engineering, University of Cambridge) bjb56@cam.ac.uk
Sébastien Ehrhardt* (Visual Geometry Group, University of Oxford) hyenal@robots.ox.ac.uk
Hanbyul Joo (Facebook AI Research, Menlo Park) hjoo@fb.com
Benjamin Graham (Facebook AI Research, London) benjamingraham@fb.com
Andrea Vedaldi (Facebook AI Research, London) vedaldi@fb.com
David Novotny (Facebook AI Research, London) dnovotny@fb.com

# Abstract

We consider the problem of obtaining dense 3D reconstructions of humans from single and partially occluded views. In such cases, the visual evidence is usually insufficient to identify a 3D reconstruction uniquely, so we aim at recovering several plausible reconstructions compatible with the input data. We suggest that ambiguities can be modelled more effectively by parametrizing the possible body shapes and poses via a suitable 3D model, such as SMPL for humans. We propose to learn a multi-hypothesis neural network regressor using a best-of-$M$ loss, where each of the $M$ hypotheses is constrained to lie on a manifold of plausible human poses by means of a generative model. We show that our method outperforms alternative approaches in ambiguous pose recovery on standard benchmarks for 3D humans, and in heavily occluded versions of these benchmarks.

# 1 Introduction

We are interested in reconstructing 3D human pose from the observation of single 2D images. As humans, we have no problem in predicting, at least approximately, the 3D structure of most scenes, including the pose and shape of other people, even from a single view. However, 2D images notoriously [9] do not contain sufficient geometric information to allow recovery of the third dimension. Hence, single-view reconstruction is only possible in a probabilistic sense, and the goal is to make the posterior distribution as sharp as possible by learning a strong prior on the space of possible solutions.

Recent progress in single-view 3D pose reconstruction has been impressive. Methods such as HMR [17], GraphCMR [20] and SPIN [19] formulate this task as learning a deep neural network that maps 2D images to the parameters of a 3D model of the human body, usually SMPL [26]. These methods work well in general, but not always (fig. 2). Their main weakness is in processing heavily occluded images of the object. When a large part of the object is missing, say the lower body of a sitting human, they output reconstructions that are often implausible. Since they can produce only one hypothesis as output, they very likely learn to approximate the mean of the posterior distribution, which may not correspond to any plausible pose. Unfortunately, this failure mode is rather common in applications, due to scene clutter and crowds.

In this paper, we propose a solution to this issue. Specifically, we consider the challenge of recovering 3D mesh reconstructions of complex articulated objects such as humans from highly ambiguous

![](images/796624f78ada10eb9acaa0c9f6d529c7af45c953bbb3e87c5743995b59f742b4.jpg)
Figure 1: Human mesh recovery in an ambiguous setting. We propose a novel method that, given an occluded input image of a person, outputs the set of meshes which constitute plausible human bodies that are consistent with the partial view. The ambiguous poses are predicted using a novel $n$-quantized-best-of-$M$ method.

image data, often containing significant occlusions of the object. Clearly, it is generally impossible to reconstruct the object uniquely if too much evidence is missing; however, we can still predict a set containing all possible reconstructions (see fig. 1), making this set as small as possible. While ambiguous pose reconstruction has been previously investigated, as far as we know, this is the first paper that looks specifically at a deep learning approach for ambiguous reconstructions of the full human mesh.

Our primary contribution is to introduce a principled multi-hypothesis framework to model the ambiguities in monocular pose recovery. In the literature, such multiple-hypothesis networks are often trained with a so-called best-of-$M$ loss: during training, the loss is incurred only by the best of the $M$ hypotheses, back-propagating gradients from that alone [12]. In this work we opt for the best-of-$M$ approach since it has been shown to outperform alternatives (such as variational auto-encoders or mixture density networks) in tasks that are similar to our 3D human pose recovery, and which have constrained output spaces [34].

A major drawback of the best-of-$M$ approach is that it only guarantees that one of the hypotheses lies close to the correct solution; however, it says nothing about the plausibility, or lack thereof, of the other $M - 1$ hypotheses, which can be arbitrarily 'bad'. Not only does this mean that most of the hypotheses may be uninformative, but in an application we are also unable to tell which hypothesis should be used, and we might very well pick a 'bad' one. This also has a detrimental effect during learning, because it makes gradients sparse: prediction errors are back-propagated through only one of the $M$ hypotheses for each training image.

In order to address these issues, our first contribution is a hypothesis reprojection loss that forces each member of the multi-hypothesis set to correctly reproject onto the 2D image keypoint annotations. The main benefit is to constrain the whole predicted set of meshes to be consistent with the observed image, not just the best hypothesis, which also addresses gradient sparsity.

Next, we observe that another drawback of best-of-$M$ pipelines is being tied to a particular value of $M$, whereas in applications we are often interested in tuning the number of hypotheses considered. Furthermore, minimizing the reprojection loss makes hypotheses geometrically consistent with the observation, but not necessarily likely. Our second contribution is thus to improve the flexibility of best-of-$M$ models by allowing them to output any smaller number $n < M$ of hypotheses, while at the same time making these hypotheses more representative of likely poses. The new method, which we call $n$-quantized-best-of-$M$, does so by quantizing the output of the best-of-$M$ model, weighted by an explicit pose prior learned by means of normalizing flows.

![](images/f92cb5ed144c4b9a58c29e6e4a1e465f23dd3b2833d8d392ce1b6ee36648623a.jpg)
Figure 2: Top: pretrained SPIN model tested on an ambiguous example. Bottom: SPIN model after fine-tuning on ambiguous examples. Note that the network tends to regress to the mean over plausible poses, predicting the missing legs pointing vertically downward, arguably the average position over the training dataset.

To summarise, our key contributions are as follows. First, we deal with the challenge of 3D mesh reconstruction for articulated objects such as humans in ambiguous scenarios. Second, we introduce an $n$-quantized-best-of-$M$ mechanism that allows best-of-$M$ models to generate an arbitrary number $n < M$ of predictions. Third, we introduce a mode-wise re-projection loss for multi-hypothesis prediction, to ensure that the predicted hypotheses are all consistent with the input.

Empirically, we achieve state-of-the-art monocular mesh recovery accuracy on Human36M, its more challenging version augmented with heavy occlusions, and the 3DPW dataset. Our ablation study validates each of our modelling choices, demonstrating their positive effect.

# 2 Related work

There is ample literature on recovering the pose of 3D models from images. We break this into five categories: methods that reconstruct 3D points directly, methods that reconstruct the parameters of a 3D model of the object via optimization, methods that do the latter via learning-based regression, hybrid methods, and methods which deal with uncertainty in 3D human reconstruction.

Reconstructing 3D body points without a model. Several papers have focused on the problem of estimating 3D body points from 2D observations [3, 29, 33, 41, 20]. Of these, Martinez et al. [27] introduced a particularly simple pipeline based on a shallow neural network. In this work, we aim at recovering the full 3D surface of a human body, rather than only lifting sparse keypoints.

Fitting 3D models via direct optimization. Several methods fit the parameters of a 3D model such as SMPL [25] or SCAPE [3] to 2D observations using an optimization algorithm to iteratively improve the fitting quality. While early approaches such as [10, 37] required some manual intervention, the SMPLify method of Bogo et al. [5] was perhaps the first to fit SMPL to 2D keypoints fully automatically. SMPL was then extended to use silhouettes, multiple views, and multiple people in [21, 13, 48]. Recent optimization methods such as [16, 32, 46] have significantly increased the scale of the models and data that can be handled.

Fitting 3D models via learning-based regression. More recently, methods have focused on regressing the parameters of the 3D models directly, in a feed-forward manner, generally by learning a deep neural network [42, 43, 30, 31, 17]. Due to the scarcity of 3D ground truth data for humans in the wild, most of these methods train a deep regressor using a mix of datasets with 3D and 2D annotations in the form of 3D MoCap markers, 2D keypoints and silhouettes. Among those, the HMR of Kanazawa et al. [17] and the GraphCMR of Kolotouros et al. [20] stand out as particularly effective.

Hybrid methods. Other authors have combined optimization and learning-based regression methods. In most cases, the integration is done by using a deep regressor to initialize the optimization algorithm [37, 21, 33, 31, 44]. However, recently Kolotouros et al. [19] have shown strong results by integrating the optimization loop into learning the deep neural network that performs the regression, thereby exploiting the weak cues available in 2D keypoints.

Modelling ambiguities in 3D human reconstruction. Several previous papers have looked at the problem of modelling ambiguous 3D human pose reconstructions. Early work includes Sminchisescu and Triggs [39], Sidenbladh et al. [36] and Sminchisescu et al. [38].

More recently, Akhter and Black [1] learn a prior over human skeleton joint angles (but not directly a prior on the SMPL parameters) from a MoCap dataset. Li and Lee [22] use the Mixture Density Networks model of [4] to capture ambiguous 3D reconstructions of sparse human body keypoints directly in physical space. Sharma et al. [35] learn a conditional variational auto-encoder to model ambiguous reconstructions as a posterior distribution; they also propose two scoring methods to extract a single 3D reconstruction from the distribution.

Cheng et al. [7] tackle the problem of video 3D reconstruction in the presence of occlusions, and show that temporal cues can be used to disambiguate the solution. While our method shares the goal of correctly handling prediction uncertainty, we differ by applying our method to predicting the full mesh of the human body. This is arguably a more challenging scenario, due to the increased complexity of the desired 3D shape.

![](images/c8b339bc13f2b277aed6d222a939f600fd3189eb47082a7e8d582b88a1c5a441.jpg)
Figure 3: Overview of our method. Given a single image of a human, during training, our method produces multiple skeleton hypotheses $\{\hat{X}^i\}_{i=1}^M$ that enter a best-of-$M$ loss, which selects the representative $\hat{X}^{m^*}$ that most accurately matches the ground truth control joints $X$. At test time, we sample an arbitrary number $n < M$ of hypotheses by quantizing the set $\{\hat{X}^i\}$, which is assumed to be sampled from the probability distribution $p(X|I)$ modeled with the normalizing flow $f$.

Finally, some recent concurrent works also consider building priors over 3D human pose using normalizing flows. Xu et al. [47] release a prior for their new GHUM/GHUML model, and Zanfir et al. [49] build a prior on SMPL joint angles to constrain their weakly-supervised network. Our method differs in that we learn our prior on 3D SMPL joints.

# 3 Preliminaries

Before discussing our method, we describe the necessary background, starting from SMPL.

SMPL. SMPL is a model of the human body parameterized by axis-angle rotations $\theta \in \mathbb{R}^{69}$ of 23 body joints, shape coefficients $\beta \in \mathbb{R}^{10}$ modelling shape variations, and a global rotation $\gamma \in \mathbb{R}^3$. SMPL defines a skinning function $S: (\theta, \beta, \gamma) \mapsto V$ that maps the body parameters to the vertices $V \in \mathbb{R}^{6890 \times 3}$ of a 3D mesh.
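
For illustration, the skinning function can be queried through the open-source `smplx` package; the model-file location and the rest-pose inputs below are assumptions for the sketch, not details from the paper:

```python
import torch
import smplx  # open-source SMPL implementation (pip install smplx)

# Assumes the SMPL model files have been downloaded into "models/".
model = smplx.create("models/", model_type="smpl")

theta = torch.zeros(1, 69)  # axis-angle rotations of the 23 body joints
beta = torch.zeros(1, 10)   # shape coefficients
gamma = torch.zeros(1, 3)   # global rotation

# The skinning function S: (theta, beta, gamma) -> V.
out = model(body_pose=theta, betas=beta, global_orient=gamma)
V = out.vertices            # (1, 6890, 3) mesh vertices
```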

Predicting the SMPL parameters from a single image. Given an image $\mathbf{I}$ containing a person, the goal is to recover the SMPL parameters $(\theta, \beta, \gamma)$ that provide the best 3D reconstruction of it. Existing algorithms [18] cast this as learning a deep network $G(I) = (\theta, \beta, \gamma, t)$ that predicts the SMPL parameters as well as the translation $t \in \mathbb{R}^3$ of the perspective camera observing the person. We assume a fixed set of camera parameters. During training, the camera is used to constrain the reconstructed 3D mesh and the annotated 2D keypoints to be consistent. Since most datasets only contain annotations for a small set of keypoints ([11] is an exception), and since these keypoints do not correspond directly to any of the SMPL mesh vertices, we need a mechanism to translate between them. This mechanism is a fixed linear regressor $J: V \mapsto X$ that maps the SMPL mesh vertices $V = S(G(I))$ to the 3D locations $X = J(V) = J(S(G(I)))$ of the $K$ joints. Then, the projections $\pi_t(X)$ of the 3D joint positions into image $\mathbf{I}$ can be compared to the available 2D annotations.
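
In code, the joint regressor and the projection $\pi_t$ amount to a few lines; a minimal sketch, in which `J_regressor` and the focal length `f` are assumed placeholders rather than the paper's actual values:

```python
import torch

def project_joints(V, J_regressor, t, f=5000.0):
    """Map mesh vertices V (6890, 3) to 2D keypoints. J_regressor is the
    fixed linear map J: V -> X; f is an assumed fixed focal length."""
    X = J_regressor @ V               # (K, 3) 3D joints, X = J(V)
    Xc = X + t                        # apply the predicted camera translation t
    return f * Xc[:, :2] / Xc[:, 2:3] # perspective projection pi_t(X), (K, 2)
```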

Normalizing flows. The idea of normalizing flows (NF) is to represent a complex distribution $p(X)$ on a random variable $X$ as a much simpler distribution $p(z)$ on a transformed version $z = f(X)$ of $X$. The transformation $f$ is learned so that $p(z)$ has a fixed shape, usually a Normal $p(z) \sim \mathcal{N}(0,1)$. Furthermore, $f$ itself must be invertible and smooth. In this paper, we utilize a particular version of NF dubbed RealNVP [8]. A more detailed explanation of NF and RealNVP is deferred to the supplementary.
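
A minimal sketch of the affine coupling block at the heart of RealNVP (the even input width and hidden size are assumptions):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP coupling layer: half of the dimensions are transformed
    conditioned on the other half, so the Jacobian is triangular and
    log|det J| is simply the sum of the predicted log-scales."""
    def __init__(self, dim, hidden=256):  # dim assumed even
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, dim))       # predicts log-scale s and shift b

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, b = self.net(x1).chunk(2, dim=-1)
        z2 = x2 * torch.exp(s) + b        # invertible given x1
        return torch.cat([x1, z2], dim=-1), s.sum(dim=-1)  # (z, log|det J|)
```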

# 4 Method

We start from a neural network architecture that implements the function $G(I) = (\theta, \beta, \gamma, t)$ described above. As shown in SPIN [19], the HMR [18] architecture attains state-of-the-art results for this task, so we use it here. However, the resulting regressor $G(I)$, given an input image $I$, can only produce a single unique solution. In general, and in particular for cases with a high degree of reconstruction ambiguity, we are interested in predicting a set of plausible 3D poses rather than a single one. We thus extend our model to explicitly produce a set of $M$ different hypotheses $G_{m}(I) = (\theta_{m},\beta_{m},\gamma_{m},t_{m})$, $m = 1,\dots,M$. This is easily achieved by modifying HMR's final output layer to produce a tensor $M$ times larger, effectively stacking the hypotheses. In what follows, we describe the learning scheme that drives the monocular predictor $G$ to achieve an optimal coverage of the plausible poses consistent with the input image. Our method is summarized in fig. 3.

# 4.1 Learning with multiple hypotheses

For learning the model, we assume a training set of $N$ images $\{I_i\}_{i = 1,\dots,N}$, each cropped around a person. Furthermore, for each training image $I_{i}$ we assume to know (1) the 2D locations $Y_{i}$ of the body joints, (2) their 3D locations $X_{i}$, and (3) the ground truth SMPL fit $(\theta_{i},\beta_{i},\gamma_{i})$. Depending on the setup, some of these quantities can be inferred from the others (e.g. we can use the function $J$ to convert the SMPL parameters to the 3D joints $X_{i}$ and then the camera projection to obtain $Y_{i}$).

Best-of-$M$ loss. Given a single input image, our network predicts a set of poses, where at least one should be similar to the ground truth annotation $X_{i}$. This is captured by the best-of-$M$ loss [12]:

$$
\mathcal{L}_{\text{best}}(J, G; m^{*}) = \frac{1}{N} \sum_{i=1}^{N} \left\| X_{i} - \hat{X}^{m_{i}^{*}}(I_{i}) \right\|, \quad m_{i}^{*} = \operatorname*{argmin}_{m=1,\dots,M} \left\| X_{i} - \hat{X}^{m}(I_{i}) \right\|, \tag{1}
$$

where $\hat{X}^m(I_i) = J(S(G_m(I_i)))$ are the 3D joints estimated by the $m$-th SMPL predictor $G_{m}$ applied to image $I_{i}$. In this way, only the best hypothesis is steered to match the ground truth, leaving the other hypotheses free to sample the space of ambiguous solutions. During the computation of this loss, we also extract the best index $m_i^*$ for each training example.
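
For concreteness, a minimal PyTorch sketch of eq. (1); the tensor shapes are assumptions, not the paper's implementation:

```python
import torch

def best_of_m_loss(X_gt, X_hat):
    """Best-of-M loss of eq. (1), a sketch. X_gt: (N, K, 3) ground-truth
    3D joints; X_hat: (N, M, K, 3) the M hypotheses per image."""
    # per-hypothesis joint error, averaged over the K joints
    err = (X_hat - X_gt[:, None]).norm(dim=-1).mean(dim=-1)  # (N, M)
    best, m_star = err.min(dim=1)  # m* of eq. (1), reused by the SMPL losses
    return best.mean(), m_star
```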

Limitations of best-of-$M$. As noted in section 1, best-of-$M$ only guarantees that one of the $M$ hypotheses is a good solution, but says nothing about the other ones. Furthermore, in applications we are often interested in modulating the number of hypotheses generated, but the best-of-$M$ regressor $G(I)$ only produces a fixed number of output hypotheses $M$, and changing $M$ would require retraining from scratch, which is intractable.

We first address these issues by introducing a method that allows us to train a best-of-$M$ model for a large $M$ once, and leverage it later to generate an arbitrary number $n < M$ of hypotheses without the need for retraining, while ensuring that these are good representatives of likely body poses.

$n$-quantized-best-of-$M$. Formally, given a set of $M$ predictions $\hat{\mathcal{X}}^M(I) = \{\hat{X}^1(I), \dots, \hat{X}^M(I)\}$, we seek to generate a smaller $n$-sized set $\bar{\mathcal{X}}^n(I) = \{\bar{X}^1(I), \dots, \bar{X}^n(I)\}$ which preserves the information contained in $\hat{\mathcal{X}}^M$. In other words, $\bar{\mathcal{X}}^n$ optimally quantizes $\hat{\mathcal{X}}^M$. To this end, we interpret the output of the best-of-$M$ model as a set of choices $\hat{\mathcal{X}}^M(I)$ for the possible pose. These poses are of course not all equally likely, but it is difficult to infer their probability from (1). We thus work with the following approximation. We consider the prior $p(X)$ on possible poses (defined in the next section), and set:

$$
p(X \mid I) = p\left(X \mid \hat{\mathcal{X}}^{M}(I)\right) = \sum_{i=1}^{M} \delta\left(X - \hat{X}^{i}(I)\right) \frac{p\left(\hat{X}^{i}(I)\right)}{\sum_{k=1}^{M} p\left(\hat{X}^{k}(I)\right)}. \tag{2}
$$

This amounts to using the best-of-$M$ output as a conditioning set (i.e. an unweighted selection of plausible poses) and then using the prior $p(X)$ to weight the samples in this set. With the weighted samples, we can then run $K$-means [24] to further quantize the best-of-$M$ output while minimizing the quantization energy $E$:

$$
E(\bar{\mathcal{X}} \mid \hat{\mathcal{X}}) = \mathbb{E}_{p(X|I)}\left[ \min_{j=1,\dots,n} \left\| X - \bar{X}^{j} \right\|^{2} \right] = \sum_{i=1}^{M} \frac{p\left(\hat{X}^{i}(I)\right)}{\sum_{k=1}^{M} p\left(\hat{X}^{k}(I)\right)} \min_{j=1,\dots,n} \left\| \hat{X}^{i}(I) - \bar{X}^{j} \right\|^{2}. \tag{3}
$$

This can be done efficiently on the GPU; for our problem, K-means consumes less than $20\%$ of the execution time of the entire forward pass of our method.
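
A sketch of this quantization step, run per image; the shapes, initialization scheme, and iteration count are assumptions:

```python
import torch

def quantize_hypotheses(X_hat, log_p, n, iters=10):
    """n-quantized-best-of-M, a sketch: weighted K-means minimizing eq. (3).
    X_hat: (M, D) flattened pose hypotheses; log_p: (M,) log-prior of each
    hypothesis under the normalizing flow; returns the n representatives."""
    w = torch.softmax(log_p, dim=0)                    # weights of eq. (2)
    centers = X_hat[torch.multinomial(w, n)].clone()   # weighted random init
    for _ in range(iters):
        assign = torch.cdist(X_hat, centers).argmin(dim=1)  # (M,)
        for j in range(n):
            mask = assign == j
            if mask.any():                             # weighted centroid update
                centers[j] = (w[mask, None] * X_hat[mask]).sum(0) / w[mask].sum()
    return centers
```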

![](images/b9c0ac25b0e6f462f8014f807fbab3784b5f367285da9c84c7f0c9b4f5d2a3cf.jpg)
Figure 4: Example samples from the normalizing flow $f: X \mapsto z$; $p(z) \sim \mathcal{N}(0,1)$, trained on a dataset of ground truth 3D SMPL control skeletons $\{X_1, \ldots, X_N\}$.

Learning the pose prior with normalizing flows. In order to obtain $p(X)$, we propose to learn a normalizing flow model in the form of the RealNVP network $f$ described in section 3 and the supplementary. RealNVP is trained by minimizing the negative log-likelihood $\mathcal{L}_{\mathrm{nf}}(f)$ of the training ground truth 3D skeletons $\{X_1, \ldots, X_N\}$ annotated in their corresponding images $\{I_1, \ldots, I_N\}$:

$$
\mathcal{L}_{\mathrm{nf}}(f) = -\frac{1}{N} \sum_{i=1}^{N} \log p\left(X_{i}\right) = -\frac{1}{N} \sum_{i=1}^{N} \left( \log \mathcal{N}\left(f\left(X_{i}\right)\right) - \sum_{l=1}^{L} \log \left| \frac{d f_{l}\left(X_{li}\right)}{d X_{li}} \right| \right). \tag{4}
$$
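
Reusing the `AffineCoupling` sketch from section 3, training the prior reduces to minimizing this negative log-likelihood; the flow depth and the (even, possibly zero-padded) skeleton width below are assumptions:

```python
import math
import torch
import torch.nn as nn

D = 52  # flattened skeleton width, assumed even (e.g. zero-padded)
# A real RealNVP alternates which half is transformed between layers;
# that permutation is omitted here for brevity.
flow = nn.ModuleList([AffineCoupling(D) for _ in range(4)])

def nf_loss(X):                          # X: (N, D) ground-truth skeletons
    z, logdet = X, torch.zeros(X.shape[0])
    for layer in flow:                   # f = f_L o ... o f_1
        z, ld = layer(z)
        logdet = logdet + ld
    log_pz = -0.5 * (z ** 2).sum(-1) - 0.5 * D * math.log(2 * math.pi)
    return -(log_pz + logdet).mean()     # the loss of eq. (4)
```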

2D re-projection loss. Since the best-of-$M$ loss optimizes a single prediction at a time, often some members of the ensemble $\hat{\mathcal{X}}(I)$ drift away from the manifold of plausible human body shapes, ultimately becoming 'dead' predictions that are never selected as the best hypothesis $m^*$. In order to prevent this, we further utilize a re-projection loss that acts across all hypotheses for a given image. More specifically, we constrain the set of 3D reconstructions to lie on projection rays passing through the 2D input keypoints with the following hypothesis re-projection loss:

$$
\mathcal{L}_{\mathrm{ri}}(J, G) = \frac{1}{N} \sum_{i=1}^{N} \sum_{m=1}^{M} \left\| Y_{i} - \pi_{t_{i}}\left(\hat{X}^{m}(I_{i})\right) \right\|. \tag{5}
$$

Note that many of our training images exhibit significant occlusion, so $Y_i$ may contain invisible or missing points. We handle this by masking $\mathcal{L}_{\mathrm{ri}}$ to prevent these points from contributing to the loss.
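
A sketch of the masked form of eq. (5); the shapes are assumptions, and the projected keypoints would come from applying the earlier projection sketch to every hypothesis:

```python
import torch

def reprojection_loss(Y, Y_hat, vis):
    """Hypothesis re-projection loss of eq. (5), masked for occlusion.
    Y: (N, K, 2) annotated keypoints; Y_hat: (N, M, K, 2) projected
    hypotheses; vis: (N, K) with 1 for visible, 0 for missing points."""
    err = (Y_hat - Y[:, None]).norm(dim=-1)   # (N, M, K)
    err = err * vis[:, None]                  # invisible points drop out
    return err.sum() / vis.sum().clamp(min=1) / Y_hat.shape[1]
```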

SMPL loss. The final loss terms, introduced by prior work [18, 31, 19], penalize deviations between the predicted and ground truth SMPL parameters. For our method, these are only applied to the best hypothesis $m_{i}^{*}$ found above:

$$
\mathcal{L}_{\theta}(G; m^{*}) = \frac{1}{N} \sum_{i=1}^{N} \left\| \theta_{i} - G_{\theta, m_{i}^{*}}(I_{i}) \right\|; \quad \mathcal{L}_{V}(G; m^{*}) = \frac{1}{N} \sum_{i=1}^{N} \left\| S(\theta_{i}, \beta_{i}, \gamma_{i}) - S\left(G_{(\theta, \beta, \gamma), m_{i}^{*}}(I_{i})\right) \right\| \tag{6}
$$

$$
\mathcal{L}_{\beta}(G; m^{*}) = \frac{1}{N} \sum_{i=1}^{N} \left\| \beta_{i} - G_{\beta, m_{i}^{*}}(I_{i}) \right\|; \quad \mathcal{L}_{\mathrm{rb}}(G; m^{*}) = \frac{1}{N} \sum_{i=1}^{N} \left\| Y_{i} - \pi_{t_{i}}\left(\hat{X}^{m_{i}^{*}}(I_{i})\right) \right\| \tag{7}
$$

Note here we use $\mathcal{L}_{\mathrm{rb}}$ to refer to a 2D re-projection error between the best hypothesis and the ground truth 2D points $Y_{i}$. This differs from the earlier loss $\mathcal{L}_{\mathrm{ri}}$, which is applied across all modes to enforce consistency with the visible input points. Note that we could have used eqs. (6) and (7) to select the best hypothesis $m_{i}^{*}$, but doing so would entail an unmanageable memory footprint, due to the requirement of SMPL-meshing every hypothesis before the best-of-$M$ selection.

Overall loss. The model is thus trained to minimize:

$$
\mathcal{L}(J, G) = \lambda_{\mathrm{ri}} \mathcal{L}_{\mathrm{ri}}(J, G) + \lambda_{\mathrm{best}} \mathcal{L}_{\mathrm{best}}(J, G; m^{*}) + \lambda_{\theta} \mathcal{L}_{\theta}(G; m^{*}) + \lambda_{\beta} \mathcal{L}_{\beta}(G; m^{*}) + \lambda_{V} \mathcal{L}_{V}(G; m^{*}) + \lambda_{\mathrm{rb}} \mathcal{L}_{\mathrm{rb}}(G; m^{*}) \tag{8}
$$

where $m^{*}$ is given in eq. (1) and $\lambda_{\mathrm{ri}}, \lambda_{\mathrm{best}}, \lambda_{\theta}, \lambda_{\beta}, \lambda_{V}, \lambda_{\mathrm{rb}}$ are weighing factors. We use a consistent set of SMPL loss weights across all experiments, $\lambda_{\mathrm{best}} = 25.0$, $\lambda_{\theta} = 1.0$, $\lambda_{\beta} = 0.001$, $\lambda_{V} = 1.0$, and set $\lambda_{\mathrm{ri}} = 1.0$. Since the training of the normalizing flow $f$ is independent of the rest of the model, we train $f$ separately by optimizing $\mathcal{L}_{\mathrm{nf}}$ with a weight of $\lambda_{\mathrm{nf}} = 1.0$. Samples from our trained normalizing flow are shown in fig. 4.
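
For concreteness, the weighted combination of eq. (8) as a sketch; the paper does not list $\lambda_{\mathrm{rb}}$, so its value here is an assumption:

```python
def total_loss(parts):
    """parts: dict of scalar loss tensors keyed 'ri', 'best', 'theta',
    'beta', 'V', 'rb'. Weights follow the paper except lambda_rb (assumed)."""
    lam = {"ri": 1.0, "best": 25.0, "theta": 1.0, "beta": 0.001, "V": 1.0, "rb": 1.0}
    return sum(lam[k] * parts[k] for k in lam)
```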

# 5 Experiments

In this section we compare our method to several strong baselines. We start by describing the datasets and the baselines, followed by a quantitative and a qualitative evaluation.

Table 1: Monocular multi-hypothesis human mesh recovery, comparing our approach to two multi-hypothesis baselines (SMPL-CVAE, SMPL-MDN) and state-of-the-art single-mode models [19, 20, 17] on Human3.6m (H36M), its ambiguous version AH36M, on 3DPW and its ambiguous version A3DPW.

<table><tr><td rowspan="2">Dataset</td><td>Quantization n</td><td colspan="2">1</td><td colspan="2">5</td><td colspan="2">10</td><td colspan="2">25</td></tr><tr><td>Metric</td><td>MPJPE</td><td>RE</td><td>MPJPE</td><td>RE</td><td>MPJPE</td><td>RE</td><td>MPJPE</td><td>RE</td></tr><tr><td rowspan="6">H36M</td><td>HMR [17]</td><td>—</td><td>56.8</td><td>—</td><td>—</td><td>—</td><td>—</td><td>—</td><td>—</td></tr><tr><td>GraphCMR [20]</td><td>71.9</td><td>50.1</td><td>—</td><td>—</td><td>—</td><td>—</td><td>—</td><td>—</td></tr><tr><td>SPIN [19]</td><td>62.2</td><td>41.8</td><td>—</td><td>—</td><td>—</td><td>—</td><td>—</td><td>—</td></tr><tr><td>SMPL-MDN</td><td>64.4</td><td>44.8</td><td>61.8</td><td>43.3</td><td>61.3</td><td>43.0</td><td>61.1</td><td>42.7</td></tr><tr><td>SMPL-CVAE</td><td>70.1</td><td>46.7</td><td>68.9</td><td>46.4</td><td>68.6</td><td>46.3</td><td>68.1</td><td>46.2</td></tr><tr><td>Ours</td><td>61.5</td><td>41.6</td><td>59.8</td><td>42.0</td><td>59.2</td><td>42.2</td><td>58.2</td><td>42.2</td></tr><tr><td rowspan="6">3DPW</td><td>HMR [17]</td><td>—</td><td>81.3</td><td>—</td><td>—</td><td>—</td><td>—</td><td>—</td><td>—</td></tr><tr><td>GraphCMR [20]</td><td>—</td><td>70.2</td><td>—</td><td>—</td><td>—</td><td>—</td><td>—</td><td>—</td></tr><tr><td>SPIN [19]</td><td>96.9</td><td>59.3</td><td>—</td><td>—</td><td>—</td><td>—</td><td>—</td><td>—</td></tr><tr><td>SMPL-MDN</td><td>105.8</td><td>64.7</td><td>96.9</td><td>61.2</td><td>95.9</td><td>60.7</td><td>94.9</td><td>60.1</td></tr><tr><td>SMPL-CVAE</td><td>96.3</td><td>61.4</td><td>93.7</td><td>60.7</td><td>92.9</td><td>60.5</td><td>92.0</td><td>60.3</td></tr><tr><td>Ours</td><td>93.8</td><td>59.9</td><td>82.2</td><td>57.1</td><td>79.4</td><td>56.6</td><td>75.8</td><td>55.6</td></tr><tr><td rowspan="3">AH36M</td><td>SMPL-MDN</td><td>113.9</td><td>74.7</td><td>98.0</td><td>70.8</td><td>95.1</td><td>69.9</td><td>91.5</td><td>69.5</td></tr><tr><td>SMPL-CVAE</td><td>114.5</td><td>76.5</td><td>111.5</td><td>75.7</td><td>110.6</td><td>75.4</td><td>109.7</td><td>75.1</td></tr><tr><td>Ours</td><td>103.6</td><td>67.8</td><td>96.4</td><td>67.1</td><td>93.5</td><td>66.0</td><td>90.0</td><td>64.2</td></tr><tr><td rowspan="3">A3DPW</td><td>SMPL-MDN</td><td>159.7</td><td>82.8</td><td>154.6</td><td>83.0</td><td>149.6</td><td>80.7</td><td>122.1</td><td>76.6</td></tr><tr><td>SMPL-CVAE</td><td>156.6</td><td>80.2</td><td>154.5</td><td>79.9</td><td>153.9</td><td>79.8</td><td>153.1</td><td>79.8</td></tr><tr><td>Ours</td><td>149.6</td><td>78.5</td><td>125.6</td><td>74.4</td><td>116.7</td><td>73.7</td><td>107.8</td><td>72.1</td></tr></table>

Table 2: Ablation study on 3DPW, removing either the normalizing flow or the mode re-projection losses and reporting the change in performance.

<table><tr><td colspan="2">Quantization n</td><td colspan="2">5</td><td colspan="2">10</td><td colspan="2">25</td></tr><tr><td>Mode reproj.</td><td>Flow weight</td><td>MPJPE</td><td>RE</td><td>MPJPE</td><td>RE</td><td>MPJPE</td><td>RE</td></tr><tr><td></td><td></td><td>86.4</td><td>57.9</td><td>84.0</td><td>57.5</td><td>79.0</td><td>56.3</td></tr><tr><td></td><td>✓</td><td>84.1</td><td>57.0</td><td>81.9</td><td>56.7</td><td>77.8</td><td>55.8</td></tr><tr><td>✓</td><td></td><td>82.7</td><td>57.5</td><td>79.9</td><td>57.0</td><td>76.2</td><td>55.9</td></tr><tr><td>✓</td><td>✓</td><td>82.2</td><td>57.1</td><td>79.4</td><td>56.6</td><td>75.8</td><td>55.6</td></tr></table>

Datasets and evaluation protocol. Our experiments use Human3.6m (H36M) [14, 6] and 3DPW [45], which are among the largest datasets of humans annotated with ground truth 3D poses. As common practice, we train on subjects S1, S5, S6, S7 and S8, and test on S9 and S11. 3DPW is only used for evaluation and, following [20], we evaluate on its test set.

Our evaluation is consistent with [19, 20]: we report two metrics that compare the lifted dense 3D SMPL shape to the ground truth mesh, Mean Per Joint Position Error (MPJPE) and Reconstruction Error (RE). For H36M, all errors are computed using the evaluation scheme known as "Protocol #2". Please refer to the supplementary for a detailed explanation of MPJPE and RE.

Multipose metrics. MPJPE and RE are traditional metrics that assume a single correct ground truth prediction for a given 2D observation. As mentioned above, such an assumption is rarely correct, due to the inherent ambiguity of the monocular 3D shape estimation task. We thus also report MPJPE-$n$ / RE-$n$, an extension of MPJPE / RE used in [22] that enables an evaluation of $n$ different shape hypotheses. In more detail, to evaluate an algorithm, we allow it to output $n$ possible predictions and, out of this set, we select the one that minimizes the MPJPE/RE metric. We report results for $n \in \{1, 5, 10, 25\}$.

![](images/fc66a1473b50d349dbaae9d4c833ec7a978e45e3bfa4ffb4c4dff243555e6357.jpg)
![](images/bf528aa0561651450cb56b7bcb86831587f08557016d16d438decfcbd0c771be.jpg)
Figure 5: Example image and corresponding annotations from the ambiguous H36M dataset AH36M. Best viewed in colour.
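
A sketch of the MPJPE-$n$ selection rule (the shapes are assumptions):

```python
import torch

def mpjpe_n(X_gt, X_hyp):
    """MPJPE-n, a sketch: the error of the best of n hypotheses.
    X_gt: (K, 3) ground-truth joints; X_hyp: (n, K, 3) predictions."""
    per_hyp = (X_hyp - X_gt[None]).norm(dim=-1).mean(dim=-1)  # (n,)
    return per_hyp.min()
```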

Ambiguous H36M/3DPW (AH36M/A3DPW). Since H36M is captured in a controlled environment, it rarely depicts challenging real-world scenarios such as body occlusions, which are the main source of ambiguity in the single-view 3D shape estimation problem.

Hence, we construct an adapted version of H36M with synthetically-generated occlusions (fig. 5), by randomly hiding a subset of the 2D keypoints and re-computing an image crop around the remaining visible joints. Please refer to the supplementary for details of the occlusion generation process.

While 3DPW does contain real scenes, for completeness, we also evaluate on a noisy, and thus more challenging, version (A3DPW) generated according to the aforementioned strategy.

Baselines. Our method is compared to two multi-pose prediction baselines. For fairness, both baselines extend the same (state-of-the-art) trunk architecture that we use, and all methods have access to the same training data.

SMPL-MDN follows [22] and outputs the parameters of a mixture density model over the set of SMPL log-rotation pose parameters. Since a naive implementation of the MDN model leads to poor performance ($\approx 200\mathrm{mm}$ MPJPE-$n = 5$ on H36M), we introduced several improvements that allow optimization of the total loss, eq. (8). SMPL-CVAE, the second baseline, is a conditional variational autoencoder [40] combined with our trunk network. SMPL-CVAE consists of an encoding network that maps a ground truth SMPL mesh $V$ to a Gaussian vector $z$, which is fed together with an encoding of the image to generate a mesh $V'$ such that $V' \approx V$. At test time, we sample $n$ plausible human meshes by drawing $z \sim \mathcal{N}(0,1)$ to evaluate with MPJPE-$n$/RE-$n$. More details of both SMPL-CVAE and SMPL-MDN are deferred to the supplementary material.

For completeness, we also compare to three more baselines that tackle the standard single-mesh prediction problem: HMR [17], GraphCMR [20], and SPIN [19], where the latter currently attains state-of-the-art performance on H36M/3DPW. All methods were trained on H36M [14], MPI-INF-3DHP [28], LSP [15], MPII [2] and COCO [23].

# 5.1 Results

Table 1 contains a comprehensive summary of the results on all 3 benchmarks. Our method outperforms SMPL-CVAE and SMPL-MDN in all metrics on all datasets. For SMPL-CVAE, we found that the encoding network often "cheats" during training by transporting all information about the ground truth, instead of only encoding the modes of ambiguity. The reason for the lower performance of SMPL-MDN is probably the representation of the probability in the space of log-rotations, rather than in the space of vertices. Modelling the MDN in the space of model vertices would be more convenient, as it is more relevant to the final evaluation metric that aggregates per-vertex errors; however, fitting such a high-dimensional $(\dim = 6890 \times 3)$ Gaussian mixture is prohibitively costly.

Furthermore, it is very encouraging to observe that our method is also able to outperform the single-mode baselines [17, 20, 19] on the single-mode MPJPE on both H36M and 3DPW. This comes as a surprise, since our method has not been optimized for this mode of operation. The difference is more significant for 3DPW, probably because 3DPW is not used for training and, hence, the normalizing flow prior acts as an effective filter of predicted outlier poses. Qualitative results are shown in fig. 6.

Ablation study. We further conduct an ablative study on 3DPW that removes components of our method and measures the incurred change in performance. More specifically, we: 1) ablate the hypothesis reprojection loss; 2) set $p(X|I) =$ Uniform in eq. (3), effectively removing the normalizing flow component and executing unweighted K-means in $n$-quantized-best-of-$M$. Table 2 demonstrates that removing either contribution decreases performance, validating our design choices.

# 6 Conclusions

In this work, we have explored the seldom-visited problem of representing the set of plausible 3D meshes corresponding to a single ambiguous input image of a human.

![](images/b85d3e48ce3b889981c47b8b1d3f70cbd9a1e92613738f1152292a72ddd4a3ce.jpg)
Figure 6: Qualitative results from $n = 5$ quantization on monocular mesh recovery on AH36m and A3DPW. From left to right, each group of figures depicts the input ambiguous image, five network hypotheses with the closest to the ground truth in blue, and the ground truth pose in green.

To this end, we have proposed a novel method that trains a single multi-hypothesis best-of-$M$ model and, using a novel $n$-quantized-best-of-$M$ strategy, allows sampling an arbitrary number $n < M$ of hypotheses.

Importantly, the proposed quantization technique leverages a normalizing flow model that effectively filters out predicted hypotheses that are unnatural. Empirical evaluation reveals performance superior to several strong probabilistic baselines on Human36M, its challenging ambiguous version, and on 3DPW. Our method encounters occasional failure cases, such as when tested on individuals with unusual shape (e.g. obese people), since we have very few such examples in the training set. Tackling such cases would make for interesting and worthwhile future work.

Acknowledgements. The authors would like to thank Richard Turner for useful technical discussions relating to normalizing flows, and Philippa Liggins, Thomas Roddick and Nicholas Biggs for proofreading. This work was entirely funded by Facebook AI Research.

# Broader impact

Our method improves the ability of machines to understand human body poses in images and videos. Understanding people automatically may arguably be misused by bad actors. However, importantly, our method is not a form of biometrics, as it does not allow the identification of people. Rather, only the overall body shape and pose are reconstructed, and these details are insufficient for unique identification. In particular, individual facial features are not reconstructed at all.

Furthermore, our method is an improvement of existing capabilities, but does not introduce a radical new capability in machine learning. Thus our contribution is unlikely to facilitate misuse of technology which is already available to anyone.

Finally, any potential negative use of a technology should be balanced against its positive uses. Understanding body poses has many legitimate applications in VR and AR, medicine, assistance to the elderly, assistance to the visually impaired, autonomous driving, human-machine interaction, image and video categorization, platform integrity, etc.

# References

[1] I. Akhter and M. J. Black. Pose-conditioned joint angle limits for 3D human pose reconstruction. In Proc. CVPR, 2015.
[2] Mykhaylo Andriluka, Leonid Pishchulin, Peter Gehler, and Bernt Schiele. 2D human pose estimation: New benchmark and state of the art analysis. In Proc. CVPR, June 2014.
[3] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, and J. Davis. SCAPE: shape completion and animation of people. In ACM Trans. on Graphics, 2005.
[4] C. M. Bishop. Mixture density networks. Technical report, Aston University, 1994.
[5] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In Proc. ECCV, 2016.
[6] Catalin Ionescu, Fuxin Li, and Cristian Sminchisescu. Latent structured models for human pose estimation. In Proc. ICCV, 2011.
[7] Y. Cheng, B. Yang, B. Wang, W. Yan, and R. T. Tan. Occlusion-aware networks for 3D human pose estimation in video. In Proc. ICCV, 2019.
[8] L. Dinh, J. Sohl-Dickstein, and S. Bengio. Density estimation using Real NVP. In Proc. ICLR, 2017.
[9] Olivier Faugeras and Quang-Tuan Luong. The Geometry of Multiple Images. MIT Press, 2001.
[10] P. Guan, A. Weiss, A. O. Balan, and M. J. Black. Estimating human shape and pose from a single image. In Proc. ICCV, 2009.
[11] Riza Alp Güler, Natalia Neverova, and Iasonas Kokkinos. DensePose: Dense human pose estimation in the wild. In Proc. CVPR, pages 7297-7306, 2018.
[12] Abner Guzman-Rivera, Dhruv Batra, and Pushmeet Kohli. Multiple choice learning: Learning to produce multiple structured outputs. In Proc. NeurIPS, pages 1799-1807, 2012.
[13] Yinghao Huang, Federica Bogo, Christoph Lassner, Angjoo Kanazawa, Peter V. Gehler, Javier Romero, Ijaz Akhter, and Michael J. Black. Towards accurate marker-less human shape and pose estimation over time. In Proc. 3DV, 2017.
[14] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments. PAMI, 36(7):1325-1339, July 2014.
[15] Sam Johnson and Mark Everingham. Learning effective human pose estimation from inaccurate annotation. In Proc. CVPR, 2011.
[16] Hanbyul Joo, Tomas Simon, and Yaser Sheikh. Total capture: A 3D deformation model for tracking faces, hands, and bodies. In Proc. CVPR, 2018.
[17] Angjoo Kanazawa, Michael J. Black, David W. Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In Proc. CVPR, 2018.
[18] Angjoo Kanazawa, Shubham Tulsiani, Alexei A. Efros, and Jitendra Malik. Learning category-specific mesh reconstruction from image collections. In Proc. ECCV, 2018.
[19] Nikos Kolotouros, Georgios Pavlakos, Michael J. Black, and Kostas Daniilidis. Learning to reconstruct 3D human pose and shape via model-fitting in the loop. In Proc. ICCV, 2019.
[20] Nikos Kolotouros, Georgios Pavlakos, and Kostas Daniilidis. Convolutional mesh regression for single-image human shape reconstruction. In Proc. CVPR, 2019.
[21] Christoph Lassner, Javier Romero, Martin Kiefel, Federica Bogo, Michael J. Black, and Peter V. Gehler. Unite the people: Closing the loop between 3D and 2D human representations. In Proc. CVPR, 2017.
[22] C. Li and G. Hee Lee. Generating multiple hypotheses for 3D human pose estimation with mixture density network. In Proc. CVPR, 2019.
[23] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In Proc. ECCV, 2014.
[24] Stuart Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, 1982.
[25] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black. SMPL: A skinned multi-person linear model. ACM Trans. on Graphics, 2015.
[26] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multi-person linear model. ACM Transactions on Graphics (TOG), 34(6):248, 2015.
[27] J. Martinez, J. Romero, M. Kiefel, F. Bogo, M. J. Black, and P. V. Gehler. A simple yet effective baseline for 3D human pose estimation. In Proc. CVPR, 2017.
[28] Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, and Christian Theobalt. Monocular 3D human pose estimation in the wild using improved CNN supervision. In Proc. 3DV. IEEE, 2017. doi: 10.1109/3dv.2017.00064. URL http://gvv.mpi-inf.mpg.de/3dhp_dataset.
[29] Dushyant Mehta, Srinath Sridhar, Oleksandr Sotnychenko, Helge Rhodin, Mohammad Shafiei, Hans-Peter Seidel, Weipeng Xu, Dan Casas, and Christian Theobalt. VNect: Real-time 3D human pose estimation with a single RGB camera. In Proc. SIGGRAPH, 2017.
[30] Mohamed Omran, Christoph Lassner, Gerard Pons-Moll, Peter V. Gehler, and Bernt Schiele. Neural body fitting: Unifying deep learning and model based human pose and shape estimation. In Proc. 3DV, 2018.
[31] Georgios Pavlakos, Luyang Zhu, Xiaowei Zhou, and Kostas Daniilidis. Learning to estimate 3D human pose and shape from a single color image. In Proc. CVPR, 2018.
[32] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed A. A. Osman, Dimitrios Tzionas, and Michael J. Black. Expressive body capture: 3D hands, face, and body from a single image. In Proc. CVPR, 2019.
[33] Grégory Rogez, Philippe Weinzaepfel, and Cordelia Schmid. LCR-Net++: Multi-person 2D and 3D pose detection in natural images. PAMI, 2018.
[34] Christian Rupprecht, Iro Laina, Robert DiPietro, Maximilian Baust, Federico Tombari, Nassir Navab, and Gregory D. Hager. Learning in an uncertain world: Representing ambiguity through multiple hypotheses. In Proc. ICCV, 2017.
[35] Saurabh Sharma, Pavan Teja Varigonda, Prashast Bindal, Abhishek Sharma, and Arjun Jain. Monocular 3D human pose estimation by generation and ordinal ranking. In Proc. ICCV, 2019.
[36] Hedvig Sidenbladh, Michael J. Black, and David J. Fleet. Stochastic tracking of 3D human figures using 2D image motion. In Proc. ECCV, 2000.
[37] Leonid Sigal, Alexandru Balan, and Michael J. Black. Combined discriminative and generative articulated pose and non-rigid shape estimation. In Proc. NeurIPS, 2008.
[38] C. Sminchisescu, Amit Kanaujia, Zhiguo Li, and Dimitris Metaxas. Discriminative density propagation for 3D human motion estimation. In Proc. CVPR, 2005.
[39] Cristian Sminchisescu and Bill Triggs. Kinematic jump processes for monocular 3D human tracking. In Proc. CVPR, 2003.
[40] Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In Proc. NeurIPS, 2015.
[41] X. Sun, B. Xiao, F. Wei, S. Liang, and Y. Wei. Integral human pose regression. In Proc. ECCV, 2018.
[42] V. Tan, I. Budvytis, and R. Cipolla. Indirect deep structured learning for 3D human body shape and pose prediction. In Proc. BMVC, 2017.
[43] Hsiao-Yu Fish Tung, Hsiao-Wei Tung, Ersin Yumer, and Katerina Fragkiadaki. Self-supervised learning of motion capture. In Proc. NeurIPS, 2017.
[44] G. Varol, D. Ceylan, B. Russel, J. Yang, E. Yumer, I. Laptev, and C. Schmid. BodyNet: Volumetric inference of 3D human body shapes. In Proc. ECCV, 2018.
[45] Timo von Marcard, Roberto Henschel, Michael Black, Bodo Rosenhahn, and Gerard Pons-Moll. Recovering accurate 3D human pose in the wild using IMUs and a moving camera. In Proc. ECCV, September 2018.
[46] Donglai Xiang, Hanbyul Joo, and Yaser Sheikh. Monocular total capture: Posing face, body, and hands in the wild. In Proc. CVPR, 2019.
[47] Hongyi Xu, Eduard Gabriel Bazavan, Andrei Zanfir, William T. Freeman, Rahul Sukthankar, and Cristian Sminchisescu. GHUM & GHUML: Generative 3D human shape and articulated pose models. In Proc. CVPR, June 2020.
[48] A. Zanfir, E. Marinoiu, and C. Sminchisescu. Monocular 3D pose and shape estimation of multiple people in natural scenes: the importance of multiple scene constraints. In Proc. CVPR, 2018.
[49] Andrei Zanfir, Eduard Gabriel Bazavan, Hongyi Xu, Bill Freeman, Rahul Sukthankar, and Cristian Sminchisescu. Weakly supervised 3D human pose and shape reconstruction with normalizing flows. In Proc. ECCV, 2020.
3dmultibodiesfittingsetsofplausible3dhumanmodelstoambiguousimagedata/images.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a31453213fc795e51224f98710f2832819b11ef3119f3de8a9cd84f05b4f3fb
size 498891
3dmultibodiesfittingsetsofplausible3dhumanmodelstoambiguousimagedata/layout.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7f172a9b86807dda4b38f012f7c5871bd6de09503c026243ff8f4c922384e78f
size 425373
3dselfsupervisedmethodsformedicalimaging/16d69717-ef6c-44ce-8db2-1247c38adfe6_content_list.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a57f8b4bc96f10a3162d23094c89f0ded9a998ab62eeef6cee51e9b398946035
size 81389
3dselfsupervisedmethodsformedicalimaging/16d69717-ef6c-44ce-8db2-1247c38adfe6_model.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:28e2f5fae8826db86cb1783a8c0045465569d1af7a162af29393349ec52cdf09
size 103643
3dselfsupervisedmethodsformedicalimaging/16d69717-ef6c-44ce-8db2-1247c38adfe6_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a7ca0083d651fd1f650b2bd03bb3dfe163eb1f160e9f00473a2a7f97632fe2fb
size 601044
3dselfsupervisedmethodsformedicalimaging/full.md ADDED
@@ -0,0 +1,246 @@
# 3D Self-Supervised Methods for Medical Imaging

Aiham Taleb$^{1,*}$, Winfried Loetzsch$^{1,\dagger}$, Noel Danz$^{1,\dagger}$, Julius Severin$^{1,*}$, Thomas Gaertner$^{1,\dagger}$, Benjamin Bergner$^{1,*}$, and Christoph Lippert$^{1,*}$

$^{1}$ Digital Health & Machine Learning, Hasso-Plattner-Institute, Potsdam University, Germany
*{firstname.lastname}@hpi.de
†{firstname.lastname}@student.hpi.uni-potsdam.de
8
+
9
+ # Abstract
10
+
11
+ Self-supervised learning methods have witnessed a recent surge of interest after proving successful in multiple application fields. In this work, we leverage these techniques, and we propose 3D versions for five different self-supervised methods, in the form of proxy tasks. Our methods facilitate neural network feature learning from unlabeled 3D images, aiming to reduce the required cost for expert annotation. The developed algorithms are 3D Contrastive Predictive Coding, 3D Rotation prediction, 3D Jigsaw puzzles, Relative 3D patch location, and 3D Exemplar networks. Our experiments show that pretraining models with our 3D tasks yields more powerful semantic representations, and enables solving downstream tasks more accurately and efficiently, compared to training the models from scratch and to pretraining them on 2D slices. We demonstrate the effectiveness of our methods on three downstream tasks from the medical imaging domain: i) Brain Tumor Segmentation from 3D MRI, ii) Pancreas Tumor Segmentation from 3D CT, and iii) Diabetic Retinopathy Detection from 2D Fundus images. In each task, we assess the gains in data-efficiency, performance, and speed of convergence. Interestingly, we also find gains when transferring the learned representations, by our methods, from a large unlabeled 3D corpus to a small downstream-specific dataset. We achieve results competitive to state-of-the-art solutions at a fraction of the computational expense. We publish our implementations<sup>1</sup> for the developed algorithms (both 3D and 2D versions) as an open-source library, in an effort to allow other researchers to apply and extend our methods on their datasets.
12
+
13
+ # 1 Introduction
14
+
15
+ Due to technological advancements in 3D sensing, the need for machine learning-based algorithms that perform analysis tasks on 3D imaging data has grown rapidly in the past few years [1-3]. 3D imaging has numerous applications, such as robotic navigation, CAD imaging, geology, and medical imaging. While we focus on medical imaging as a test-bed for our proposed 3D algorithms in this work, we ensure their applicability to other 3D domains. Medical imaging plays a vital role in patient healthcare, as it aids in disease prevention, early detection, diagnosis, and treatment. Yet efforts to utilize advancements in machine learning algorithms are often hampered by the sheer expense of the expert annotation required [4]. Generating expert annotations of 3D medical images at scale is non-trivial, expensive, and time-consuming. Another related challenge in medical imaging is the relatively small sample size; this becomes more obvious when studying a particular disease, for instance. Also, gaining access to large-scale datasets is often difficult due to privacy concerns. Hence, the scarcity of data and annotations is one of the main constraints for machine learning applications in medical imaging.
16
+
17
+ Several efforts have attempted to address these challenges, as they are common to other application fields of deep learning. A widely used technique is transfer learning, which aims to reuse the features of already trained neural networks on different, but related, target tasks. A common example is adapting the features from networks trained on ImageNet, which can be reused for other visual tasks, e.g. semantic segmentation. To some extent, transfer learning has made it easier to solve tasks with a limited number of samples. However, as mentioned before, the medical domain is supervision-starved. Despite attempts to leverage ImageNet [5] features in the medical context [6-9], the difference in the distributions of natural and medical images is significant, i.e. generalizing across these domains is questionable and can suffer from dataset bias [10]. Recent analysis [11] has also found that such transfer learning offers limited performance gains, relative to the computational costs it incurs. Consequently, it is necessary to find better solutions for the aforementioned challenges.
18
+
19
+ A viable alternative is to employ self-supervised (unsupervised) methods, which proved successful in multiple domains recently. In these approaches, the supervisory signals are derived from the data. In general, we withhold some part of the data, and train the network to predict it. This prediction task defines a proxy loss, which encourages the model to learn semantic representations about the concepts in the data. Subsequently, this facilitates data-efficient fine-tuning on supervised downstream tasks, significantly reducing the burden of manual annotation. Despite the surge of interest in self-supervised methods in the machine learning community, little work has been done to adopt these methods in the medical imaging domain. We believe that self-supervised learning is directly applicable in the medical context, and can offer cheaper solutions for the challenges faced by conventional supervised methods. Unlabelled medical images carry valuable information about organ structures, and self-supervision enables the models to derive notions about these structures with no additional annotation cost.
20
+
21
+ A particular aspect of most medical images, which received little attention by previous self-supervised methods, is their 3D nature [12]. The common paradigm is to cast 3D imaging tasks in 2D, by extracting slices along an arbitrary axis, e.g. the axial dimension. However, such tasks can substantially benefit from the full 3D spatial context, thus capturing rich anatomical information. We believe that relying on the 2D context to derive data representations from 3D images, in general, is a suboptimal solution, which compromises the performance on downstream tasks.
22
+
23
+ Our contributions. As a result, in this work, we propose five self-supervised tasks that utilize the full 3D spatial context, aiming to better adopt self-supervision in 3D imaging. The proposed tasks are: 3D Contrastive Predictive Coding, 3D Rotation prediction, 3D Jigsaw puzzles, Relative 3D patch location, and 3D Exemplar networks. These algorithms are inspired by their successful 2D counterparts, and to the best of our knowledge, most of these methods have never been extended to the 3D context, let alone applied to the medical domain. Several computational and methodological challenges arise when designing self-supervised tasks in 3D, due to the increased data dimensionality, which we address in our methods to ensure their efficiency. We perform extensive experiments using four datasets in three different downstream tasks, and we show that our 3D tasks result in rich data representations that improve data-efficiency and performance. Finally, we publish the implementations of our 3D tasks, and also of their 2D versions, in order to allow other researchers to evaluate these methods on other imaging datasets.
24
+
25
+ # 2 Related work
26
+
27
+ In general, unsupervised representation learning can be formulated as learning an embedding space, in which data samples that are semantically similar are closer, and those that are different are far apart. The self-supervised family constructs such a representation space by creating a supervised proxy task from the data itself. Then, the embeddings that solve the proxy task will also be useful for other real-world downstream tasks. Several methods in this line of research have been developed recently, and they found applications in numerous fields [13]. In this work, we focus on methods that operate on images only.
28
+
29
+ Self-supervised methods differ in their core building block, i.e. the proxy task used to learn representations from unlabelled input data. A commonly used supervision source for proxy tasks is the spatial context from images, which was first inspired by the skip-gram Word2Vec [14] algorithm. This idea was generalized to images in [15], in which a visual representation is learned by predicting the position of an image patch relative to another. A similar work extended this patch-based approach to solve Jigsaw Puzzles [16]. Other works have used different supervision sources, such as image colors [17], clustering [18], image rotation prediction [19], object saliency [20], and image reconstruction [21]. In recent works, Contrastive Predictive Coding (CPC) approaches [22, 23] advanced the results of self-supervised methods on multiple imaging benchmarks [24, 25]. These methods utilize the idea of contrastive learning in the latent space, similar to Noise Contrastive Estimation [26]. In 2D images, the model has to predict the latent representation of the next (adjacent) image patches. Our work follows this line of research; however, our methods utilize the full 3D context.
32
+
33
+ While videos are rich with more types of supervisory signals [27-31], we discuss here a subset of these works that utilize 3D-CNNs to process input videos. In this context, 3D-CNNs are employed to simultaneously extract spatial features from each frame, and temporal features across multiple frames, which are typically stacked along the $3^{\mathrm{rd}}$ (depth) dimension. The idea of exploiting 3D convolutions for videos was proposed in [32] for human action recognition, and was later extended to other applications [13]. In self-supervised learning, however, the number of pretext tasks that exploit this technique is limited. Kim et al. [33] proposed a task that extracts cubic puzzles of $2 \times 2 \times 1$, meaning that the $3^{\mathrm{rd}}$ dimension is not actually utilized in puzzle creation. Jing et al. [34] extended the rotation prediction task [19] to videos, by simply stacking video frames along the depth dimension; however, this dimension is not employed in the design of their task, as only spatial rotations are considered. Han et al. proposed a dense encoding of spatio-temporal frame blocks to predict future scene representations recurrently, in conjunction with a curriculum training scheme to extend the predicted future. Similarly, the depth dimension is not employed in this task. Our versions of 3D Jigsaw puzzles and 3D Rotation prediction, in contrast, are more general: we exploit the depth ($3^{\mathrm{rd}}$) dimension in the design of our tasks. For instance, we solve larger 3D puzzles up to $3 \times 3 \times 3$, and we also predict more rotations along all axes in the 3D space. Furthermore, in our 3D Contrastive Predictive Coding task, we predict patch representations along all 3 dimensions, scanning input volumes in a manner that resembles a pyramid. In general, we believe the different nature of the data, 3D volumetric scans vs. stacked video frames, influences the design of proxy tasks, i.e. the depth dimension has an actual semantic meaning in volumetric scans. Hence, we consider the whole 3D context when designing all of our methods, aiming to learn valuable anatomical information from unlabeled 3D volumetric scans.
34
+
35
+ In the medical context, self-supervision has found use-cases in diverse applications such as depth estimation in monocular endoscopy [35], robotic surgery [36], medical image registration [37], body part recognition [38], disc degeneration assessment from spinal MRIs [39], cardiac image segmentation [40], body part regression for slice ordering [41], and medical instrument segmentation [42]. Spitzer et al. [43] sample 2D patches from a 3D brain, and predict the distance between these patches as a supervision signal. Tajbakhsh et al. [44] use orientation prediction from medical images as a proxy task. There are multiple other examples of self-supervised methods for medical imaging, such as [45-49]. While these attempts are a step forward for self-supervised learning in medical imaging, they have some limitations. First, as opposed to our work, many of these works make assumptions about input data, resulting in engineered solutions that hardly generalize to other target tasks. Second, none of the above works capture the complete spatial context available in 3-dimensional scans, i.e. they only operate on 2D/2.5D spatial context. In a more related work, Zhou et al. [50] extended image reconstruction techniques from 2D to 3D, and implemented multiple self-supervised tasks based on image reconstruction. Zhuang et al. [51] and Zhu et al. [52] developed a proxy task that solves small 3D jigsaw puzzles. Their proposed puzzles were limited to a puzzle complexity of $2 \times 2 \times 2$. Our version of 3D Jigsaw puzzles is able to efficiently solve larger puzzles, e.g. $3 \times 3 \times 3$, and outperforms their method's results on the downstream task of brain tumor segmentation. In this paper, we continue this line of work, and develop five different algorithms for 3D data, whose nature and performance can accommodate more types of target medical applications.
36
+
37
+ # 3 Self-Supervised Methods
38
+
39
+ In this section, we discuss the formulations of our 3D self-supervised pretext tasks, all of which learn data representations from unlabeled samples (3D images), hence requiring no manual annotation effort in the self-supervised pretraining stage. Each task results in a pretrained encoder model $g_{enc}$ that can be fine-tuned in various downstream tasks, subsequently.
40
+
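+ To make this pretrain-then-fine-tune pattern concrete, below is a minimal PyTorch sketch; the modules, sizes, and heads are illustrative placeholders under our assumptions, not the architecture used in our experiments.
+
+ ```python
+ import torch.nn as nn
+
+ # Stand-in 3D encoder g_enc; any backbone could take its place.
+ g_enc = nn.Sequential(
+     nn.Conv3d(1, 16, kernel_size=3, padding=1),
+     nn.ReLU(),
+     nn.AdaptiveAvgPool3d(1),
+     nn.Flatten(),
+ )
+
+ # Stage 1: attach a proxy-task head (e.g. the 10 classes of 3D-Rot)
+ # and train on unlabeled scans.
+ proxy_head = nn.Linear(16, 10)
+ pretrain_model = nn.Sequential(g_enc, proxy_head)
+
+ # Stage 2: discard the proxy head and fine-tune the same encoder
+ # (weights are shared) on the labeled downstream task.
+ downstream_head = nn.Linear(16, 5)  # hypothetical 5-class downstream task
+ finetune_model = nn.Sequential(g_enc, downstream_head)
+ ```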
41
+ ![](images/fc4e1a257b6a0954ac7f16cc24705ccc8f4f713c959125e20390ad6ea6bde3a9.jpg)
42
+ (a)
43
+
44
+ ![](images/d428aa8efb7e046bb3027c7991fe7eeb33733594bcde614a4f3105e8f1806fa0.jpg)
45
+ (b)
46
+
47
+ ![](images/e20f383f115db637795112a02b7a17b001f5d910679de743d4dd104dd01dde3f.jpg)
48
+ (c)
49
+
50
+ ![](images/571971f46508d2eabbc0db3a59f04a642e838b7e76b7e40e699067e020003875.jpg)
51
+ (d)
52
+
53
+ ![](images/93b529183385cd34503820fdc22297904af0ce6cdbafe644a2c93a65012e4265.jpg)
54
+ (e)
55
+ Figure 1: (a) 3D-CPC: each input image is split into 3D patches, and the latent representations $z_{i+1,j,k}, z_{i+2,j,k}$ of next patches $x_{i+1,j,k}, x_{i+2,j,k}$ (shown in green) are predicted using the context vector $c_{i,j,k}$ . The considered context is the current patch $x_{i,j,k}$ (shown in orange), plus the above patches that form an inverted pyramid (shown in blue). (b) 3D-RPL: assuming a 3D grid of 27 patches $(3 \times 3 \times 3)$ , the model is trained to predict the location $y_{q}$ of the query patch $x_{q}$ (shown in red), relative to the central patch $x_{c}$ (whose location is 13). (c) 3D-Jig: by predicting the permutation applied to the 3D image when creating a $3 \times 3 \times 3$ puzzle, we are able to reconstruct the scrambled input. (d) 3D-Rot: the network is trained to predict the rotation degree (out of the 10 possible degrees) applied on input scans. (e) 3D-Exe: the network is trained with a triplet loss, which drives positive samples closer in the embedding space $(x_{i}^{+} \text{ to } x_{i})$ , and the negative samples $(x_{i}^{-})$ farther apart.
56
+
57
+ # 3.1 3D Contrastive Predictive Coding (3D-CPC)
58
+
59
+ Following the contrastive learning idea, first proposed in [26], this universal unsupervised technique predicts the latent space for future (next or adjacent) samples. Recently, CPC found success in multiple application fields, e.g. its 1D version in audio signals [22], and its 2D versions in images [22, 23], and was able to bridge the gap between unsupervised and fully-supervised methods [24]. Our proposed CPC version generalizes this technique to 3D inputs, and defines a proxy task by cropping equally-sized and overlapping 3D patches from each input scan. Then, the encoder model $g_{enc}$ maps each input patch $x_{i,j,k}$ to its latent representation $z_{i,j,k} = g_{enc}(x_{i,j,k})$ . Next, another model called the context network $g_{cxt}$ is used to summarize the latent vectors of the patches in the context of $x_{i,j,k}$ , and produce its context vector $c_{i,j,k} = g_{cxt}(\{z_{u,v,w}\}_{u\leq i,v,w})$ , where $\{z\}$ denotes a set of latent vectors. Finally, because $c_{i,j,k}$ captures the high level content of the context that corresponds to $x_{i,j,k}$ , it allows for predicting the latent representations of next (adjacent) patches $z_{i + l,j,k}$ , where $l\geq 0$ . This prediction task is cast as an $N$ -way classification problem by utilizing the InfoNCE loss [22], which takes its name from its ability to maximize the mutual information between $c_{i,j,k}$ and $z_{i + l,j,k}$ . Here, the classes are the latent representations $\{z\}$ of the patches, among which is one positive representation, and the rest $N - 1$ are negative. Formally, the CPC loss can be written as follows:
62
+
63
+ $$
64
+ \begin{aligned} \mathcal{L}_{CPC} &= - \sum_{i,j,k,l} \log p\left(z_{i+l,j,k} \mid \hat{z}_{i+l,j,k}, \left\{z_{n}\right\}\right) \\ &= - \sum_{i,j,k,l} \log \frac{\exp\left(\hat{z}_{i+l,j,k}^{\top} z_{i+l,j,k}\right)}{\exp\left(\hat{z}_{i+l,j,k}^{\top} z_{i+l,j,k}\right) + \sum_{n} \exp\left(\hat{z}_{i+l,j,k}^{\top} z_{n}\right)} \tag{1} \end{aligned}
65
+ $$
66
+
67
+ This loss corresponds to the categorical cross-entropy loss, which trains the model to recognize the correct representation $z_{i + l,j,k}$ among the list of negative representations $\{z_n\}$ . These negative samples (3D patches) are chosen randomly from other locations in the input image. In practice, similar to the original NCE [26], this task is solved as a binary pairwise classification task.
68
+
69
+ It is noteworthy that the proposed 3D-CPC task, illustrated in Fig. 1 (a), allows employing any network architecture in the encoder $g_{enc}$ and the context $g_{cxt}$ networks. In our experiments, we follow [22] in using an autoregressive network based on GRUs [53] for the context network $g_{cxt}$ ; however, masked convolutions are a valid alternative [54]. In terms of what the 3D context of each patch $x_{i,j,k}$ includes, we follow the idea of an inverted pyramid neighborhood, which is inspired by [55, 56]. This context is chosen based on a tradeoff between computational cost and performance. Overly large contexts (e.g. the full surroundings of a patch) incur prohibitive computation and memory costs; the inverted-pyramid context proved a good tradeoff.
70
+
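+ As a minimal sketch of the InfoNCE objective in Eq. (1) (not our exact implementation), the following PyTorch snippet scores one predicted latent against its positive and a set of negatives; the tensor shapes and names are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def info_nce_loss(pred, positive, negatives):
+     """InfoNCE for one prediction step of 3D-CPC.
+
+     pred:      (B, D)    predicted latent, i.e. \hat{z}_{i+l,j,k}
+     positive:  (B, D)    true latent z_{i+l,j,k} of the next patch
+     negatives: (B, N, D) latents of N patches from random other locations
+     """
+     # Dot-product scores between the prediction and each candidate latent.
+     pos_logit = torch.sum(pred * positive, dim=-1, keepdim=True)  # (B, 1)
+     neg_logits = torch.einsum("bd,bnd->bn", pred, negatives)      # (B, N)
+     logits = torch.cat([pos_logit, neg_logits], dim=1)            # (B, 1+N)
+     # The positive always sits at index 0, so the loss reduces to
+     # standard cross-entropy against a constant label.
+     labels = torch.zeros(pred.size(0), dtype=torch.long, device=pred.device)
+     return F.cross_entropy(logits, labels)
+ ```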
71
+ # 3.2 Relative 3D patch location (3D-RPL)
72
+
73
+ In this task, the spatial context in images is leveraged as a rich source of supervision, in order to learn semantic representations of the data. First proposed by Doersch et al. [15] for 2D images, this task inspired several works in self-supervision. In our 3D version, shown in Fig. 1 (b), we leverage the full 3D spatial context in the design of our task. From each input 3D image, a 3D grid of $N$ non-overlapping patches $\{x_{i}\}_{i\in \{1,\dots ,N\}}$ is sampled at random locations. Then, the patch $x_{c}$ in the center of the grid is used as a reference, and a query patch $x_{q}$ is selected from the surrounding $N - 1$ patches. Next, the location of $x_{q}$ relative to $x_{c}$ is used as the positive label $y_{q}$ . This casts the task as an $N - 1$ -way classification problem, in which the locations of the remaining grid patches are used as the negative samples $\{y_{n}\}$ . Formally, the cross-entropy loss in this task is written as:
74
+
75
+ $$
76
+ \mathcal{L}_{RPL} = - \sum_{k=1}^{K} \log p\left(y_{q} \mid \hat{y}_{q}, \left\{y_{n}\right\}\right) \tag{2}
77
+ $$
78
+
79
+ where $K$ is the number of queries extracted from all samples. In order to prevent the model from solving this task quickly by finding shortcut solutions, e.g. edge continuity, we follow [15] in leaving random gaps (jitter) between neighboring 3D patches (see the sketch below). More details are in the Appendix.
80
+
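+ The following NumPy sketch shows one way such a jittered query/label pair could be sampled; the grid, patch, and jitter sizes are illustrative assumptions, not our tuned values.
+
+ ```python
+ import numpy as np
+
+ def sample_rpl_query(volume, grid=3, patch=32, jitter=4):
+     """Sample (query patch, center patch, relative-location label) for 3D-RPL.
+
+     Assumes `volume` is large enough to hold a grid**3 lattice of patches;
+     random gaps of up to `jitter` voxels discourage edge-continuity shortcuts.
+     """
+     cell = patch + jitter  # room for one patch plus its random gap
+     x0, y0, z0 = [np.random.randint(0, s - grid * cell) for s in volume.shape]
+     patches = []
+     for i in range(grid):
+         for j in range(grid):
+             for k in range(grid):
+                 # Random offset inside the cell -> jitter between neighbors.
+                 ox, oy, oz = np.random.randint(0, jitter + 1, size=3)
+                 xs, ys, zs = x0 + i * cell + ox, y0 + j * cell + oy, z0 + k * cell + oz
+                 patches.append(volume[xs:xs + patch, ys:ys + patch, zs:zs + patch])
+     center = len(patches) // 2  # index 13 in a 3x3x3 grid
+     query = np.random.choice([i for i in range(len(patches)) if i != center])
+     label = query if query < center else query - 1  # one of N-1 = 26 classes
+     return patches[query], patches[center], label
+ ```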
81
+ # 3.3 3D Jigsaw puzzle solving (3D-Jig)
82
+
83
+ Deriving a Jigsaw puzzle grid from an input image, be it in 2D or 3D, and solving it can be viewed as an extension to the above patch-based RPL task. In our 3D Jigsaw puzzle task, which is inspired by its 2D counterpart [16] and illustrated in Fig. 1 (c), the puzzles are formed by sampling an $n \times n \times n$ grid of 3D patches. Then, these patches are shuffled according to an arbitrary permutation, selected from a set of predefined permutations. This set of permutations with size $P$ is chosen out of the $(n^3)!$ possible permutations, by following the Hamming-distance-based algorithm in [16] (details in Appendix), and each permutation is assigned an index $y_p \in \{1,..,P\}$ . Therefore, the problem is cast as a $P$ -way classification task, i.e., the model is trained to simply recognize the index $y_p$ of the applied permutation, allowing us to solve the 3D puzzles in an efficient manner. Formally, we minimize the cross-entropy loss $\mathcal{L}_{Jig}(y_p^k,\hat{y}_p^k)$ , where $k \in \{1,..,K\}$ is an arbitrary 3D puzzle from the list of extracted $K$ puzzles. Similar to 3D-RPL, we use the trick of adding random jitter in 3D-Jig.
84
+
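+ A sketch of this permutation machinery is shown below: a greedy max-Hamming selection approximating the algorithm of [16] (enumerating all $(n^3)!$ permutations is infeasible), followed by puzzle scrambling; the candidate pool size is an arbitrary assumption.
+
+ ```python
+ import numpy as np
+
+ def hamming_permutations(n_cells=27, P=100, pool=10000, seed=0):
+     """Greedily pick P permutations that are mutually far apart in Hamming distance."""
+     rng = np.random.default_rng(seed)
+     candidates = np.array([rng.permutation(n_cells) for _ in range(pool)])
+     chosen = [candidates[0]]
+     for _ in range(P - 1):
+         # Distance of each candidate to its nearest already-chosen permutation.
+         d = np.min([(candidates != c).sum(axis=1) for c in chosen], axis=0)
+         chosen.append(candidates[int(np.argmax(d))])
+     return np.stack(chosen)
+
+ def make_puzzle(patches, perms):
+     """Scramble the 27 grid patches; the permutation index is the label y_p."""
+     y = np.random.randint(len(perms))
+     return [patches[i] for i in perms[y]], y
+ ```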
85
+ # 3.4 3D Rotation prediction (3D-Rot)
86
+
87
+ Originally proposed by Gidaris et al. [19], the rotation prediction task encourages the model to learn visual representations by simply predicting the angle by which the input image is rotated. The intuition behind this task is that for a model to successfully predict the angle of rotation, it needs to capture sufficient semantic information about the object in the input image. In our 3D Rotation prediction task, 3D input images are rotated by a random degree $r \in \{1,..,R\}$ out of the $R$ considered degrees. In this task, for simplicity, we consider the multiples of 90 degrees $(0^{\circ}, 90^{\circ}, 180^{\circ}, 270^{\circ})$ along each axis of the 3D coordinate system $(x,y,z)$ . There are 4 possible rotations per axis, amounting to 12 possible rotations. However, rotating input scans by $0^{\circ}$ along the 3 axes produces 3 identical versions of the original scan, hence we consider 10 rotation classes instead. Therefore, in this setting, this proxy task can be solved as a 10-way classification problem. The model is then tasked to predict the rotation degree (class), as shown in Fig. 1 (d). Formally, we minimize the cross-entropy loss $\mathcal{L}_{Rot}(r^{k},\hat{r}^{k})$ , where $k \in \{1,..,K\}$ is an arbitrary rotated 3D image from the list of $K$ rotated images. It is noteworthy that we create multiple rotated versions of each 3D image.
90
+
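+ A minimal sketch of how the 10 rotation classes could be enumerated (one identity class plus three 90-degree multiples about each axis); the plane/axis convention here is an assumption.
+
+ ```python
+ import numpy as np
+
+ def rotations_3d(volume):
+     """Return (rotated volume, class label) pairs for the 10 classes of 3D-Rot."""
+     planes = [(1, 2), (0, 2), (0, 1)]  # rotation planes about the x, y, z axes
+     variants = [(volume, 0)]           # class 0: the three identical 0-degree cases merged
+     label = 1
+     for plane in planes:
+         for k in (1, 2, 3):            # 90, 180, 270 degrees
+             variants.append((np.rot90(volume, k=k, axes=plane), label))
+             label += 1
+     return variants                    # 1 identity + 9 rotations = 10 classes
+ ```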
91
+ # 3.5 3D Exemplar networks (3D-Exe)
92
+
93
+ The task of Exemplar networks, proposed by Dosovitskiy et al. [57], is one of the earliest methods in the self-supervised family. To derive supervision labels, it relies on image augmentation techniques, i.e. transformations. Assuming a training set $X = \{x_{1},\dots x_{N}\}$ , and a set of $K$ image transformations $\mathcal{T} = \{T_1,\dots T_K\}$ , a new surrogate class $S_{x_i}$ is created by transforming each training sample $x_{i}\in X$ where $S_{x_i} = \mathcal{T}x_i = \{Tx_i\mid T\in \mathcal{T}\}$ . Therefore, the task is cast as a regular classification task with a cross-entropy loss. However, this classification task becomes prohibitively expensive as the dataset size grows larger, as the number of classes grows accordingly. Thus, in our proposed 3D version of Exemplar networks, shown in Fig. 1 (e), we employ a different mechanism that relies on the triplet loss instead [58]. Formally, assuming $x_{i}$ is a random training sample and $z_{i}$ is its corresponding embedding vector, $x_{i}^{+}$ is a transformed version of $x_{i}$ (seen as a positive example) with an embedding $z_{i}^{+}$ , and $x_{i}^{-}$ is a different sample from the dataset (seen as negative) with an embedding $z_{i}^{-}$ . The triplet loss is written as follows:
94
+
95
+ $$
96
+ \mathcal{L}_{Exe} = \frac{1}{N_{T}} \sum_{i=1}^{N_{T}} \max\left\{0, D\left(z_{i}, z_{i}^{+}\right) - D\left(z_{i}, z_{i}^{-}\right) + \alpha\right\} \tag{3}
97
+ $$
98
+
99
+ where $D(.)$ is a pairwise distance function, for which we use the $L_{2}$ distance, following [59]. $\alpha$ is a margin (gap) that is enforced between positive and negative pairs, which we set to 1. The triplet loss enforces $D(z_{i},z_{i}^{-}) > D(z_{i},z_{i}^{+})$ , i.e. it encourages transformed versions of the same sample (positive samples) to move closer to each other in the learned embedding space, and farther away from other (negative) samples. Replacing the triplet loss with a contrastive loss [26] is possible in this method, and has been found to improve learned representations from natural images [24]. In addition, the representations learned by Exemplar can be affected by the negative-sampling strategy. The simple option is to sample from within the same batch; however, it is also possible to sample from the whole dataset. The latter choice is computationally more expensive, but is expected to improve the learned representations, as it makes the task harder. It is noteworthy that we apply the following 3D transformations: random flipping along an arbitrary axis, random rotation along an arbitrary axis, random brightness and contrast, and random zooming.
100
+
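+ A minimal PyTorch sketch of Eq. (3) with $L_2$ distances and margin $\alpha = 1$; batching and the in-batch choice of negatives are simplifying assumptions.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def exemplar_triplet_loss(z_anchor, z_pos, z_neg, margin=1.0):
+     """Triplet loss over embeddings of shape (B, D).
+
+     z_anchor: embeddings z_i of the original samples
+     z_pos:    embeddings z_i^+ of their transformed versions
+     z_neg:    embeddings z_i^- of different samples
+     """
+     d_pos = F.pairwise_distance(z_anchor, z_pos, p=2)  # L2, following [59]
+     d_neg = F.pairwise_distance(z_anchor, z_neg, p=2)
+     return torch.clamp(d_pos - d_neg + margin, min=0).mean()
+ ```
+
+ The same objective is also available built-in as `torch.nn.TripletMarginLoss(margin=1.0)`.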
101
+ # 4 Experimental Results
102
+
103
+ In this section, we present the evaluation results of our methods, assessing the quality of their learned representations by fine-tuning them on three downstream tasks. In each task, we analyze the obtained gains in data-efficiency, performance, and speed of convergence. In addition, each task aims to demonstrate a certain use-case for our methods. We follow the commonly used evaluation protocols for self-supervised methods in each of these tasks. The chosen tasks are:
104
+
105
+ - Brain Tumor Segmentation from 3D MRI (Subsection 4.1): in which we study the possibility for transfer learning from a different unlabeled 3D corpus, following [60].
106
+ - Pancreas Tumor Segmentation from 3D CT (Subsection 4.2): to demonstrate how to use the same unlabeled dataset, following the data-efficient evaluation protocol in [23].
107
+ - Diabetic Retinopathy Detection from 2D Fundus Images (Subsection 4.3): to showcase our implementations for the 2D versions of our methods, following [23]. Here, we also evaluate pretraining on a different large corpus, then fine-tuning on the downstream dataset.
108
+
109
+ We provide additional details about architectures, training procedures, the effect of augmentation in Exemplar, and how we initialize decoders for segmentation tasks in the Appendix.
110
+
111
+ # 4.1 Brain Tumor Segmentation Results
112
+
113
+ In this task, we evaluate our methods by fine-tuning the learned representations on the Multimodal Brain Tumor Segmentation (BraTS) 2018 [61, 62] benchmark. Before that, we pretrain our models on brain MRI data from the UK Biobank [63] (UKB) corpus, which contains roughly $22K$ 3D scans. Due to this large number of unlabeled scans, UKB is suitable for unsupervised pretraining. The BraTS dataset contains annotated MRI scans for 285 training and 66 validation cases. We fine-tune on BraTS' training set, and evaluate on its validation set. Following the official BraTS challenge, we report Dice scores for the Whole Tumor (WT), Tumor Core (TC), and Enhanced Tumor (ET) tasks. The Dice score (F1-Score) is twice the area of overlap between two segmentation masks divided by the total number of pixels in both. In order to assess the quality of the learned representations by our 3D proxy tasks, we compare to the following baselines:
114
+
115
+ - Training from scratch: the first sensible baseline for any self-supervised method, in general, is the same model trained on the downstream task when initialized from random weights. Comparing to this baseline provides insights about the benefits of self-supervised pretraining.
116
+ - Training on 2D slices: this baseline aims to quantitatively show how our proposal to operate on the 3D context benefits the learned representations, compared to 2D methods.
117
+ - Supervised pretraining: this baseline uses automatic segmentation labels from FSL-FAST [64], which include masks for three brain tissues.
118
+ - Baselines from the BraTS challenge: we compare to the methods [65-68], which all use a single model with an architecture similar to ours, i.e. 3D U-Net [69].
119
+
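+ For reference, the Dice score defined above amounts to a few lines of NumPy (a sketch for binary masks; the multi-class BraTS evaluation applies it per tumor region):
+
+ ```python
+ import numpy as np
+
+ def dice_score(pred, target):
+     """Dice (F1) score: twice the overlap of two binary masks divided by
+     the total number of voxels in both."""
+     pred, target = pred.astype(bool), target.astype(bool)
+     total = pred.sum() + target.sum()
+     if total == 0:
+         return 1.0  # convention: two empty masks agree perfectly
+     return 2.0 * np.logical_and(pred, target).sum() / total
+ ```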
120
+ Discussion. We first assess the gains in data-efficiency in this task. To quantify these gains, we measure the segmentation performance at different sample sizes. We randomly select subsets of patients at $10\%$ , $25\%$ , $50\%$ , and $100\%$ of the full dataset size, and we fine-tune our models on these subsets. Here, we compare to the baselines listed above. As shown in Fig. 2, our 3D methods outperform the baseline model trained from scratch by a large margin when using few training samples, and behave similarly as the number of labeled samples increases. The lowest-data regime case suggests the potential for generic unsupervised features, and highlights the large gains in data-efficiency. Also, the proposed 3D versions considerably outperform their 2D counterparts, which are trained on slices extracted from the 3D images. We also measure how our methods affect the final brain tumor segmentation performance, in Table 1. All our methods outperform the baseline trained from scratch as well as their 2D counterparts, confirming the benefits of pretraining with our 3D tasks on downstream performance. We also achieve comparable results to baselines from the BraTS challenge, and we outperform these baselines in some cases, e.g., our 3D-RPL method outperforms all baselines in terms of ET and TC dice scores. Also, our model pretrained with 3D-Exemplar, with fewer downstream training epochs, matches the result of Isensee et al. [65] in terms of WT dice score, which is one of the top results on the BraTS 2018 challenge. In comparison to the supervised baseline using automatic FAST labels, we find that our results are comparable, outperforming this baseline in some cases. Our results in this downstream task also demonstrate the generalization ability of our 3D tasks across different domains. This result is significant, because medical datasets are supervision-starved, e.g., images may be collected as part of clinical routine, but far fewer (high-quality) labels are produced, due to annotation costs.
121
+
122
+ # 4.2 Pancreas Tumor Segmentation Results
123
+
124
+ In this downstream task, we evaluate our models on 3D CT scans of pancreas tumors from the medical decathlon benchmark [70]. The Pancreas dataset contains annotated CT scans for 420 cases. Each scan in this dataset contains 3 different classes: pancreas (class 1), tumor (class 2), and background (class 0). To measure the performance on this benchmark, two dice scores are computed for classes 1 and 2. In this task, we pretrain using our proposed 3D tasks on pancreas scans without their annotation masks. Then, we fine-tune the obtained models on subsets of annotated data to assess the gains in both data-efficiency and performance. Finally, we also compare to the baseline model trained from scratch and to 2D models, similar to the previous downstream task. Fig. 3 demonstrates the gains
125
+
126
+ ![](images/9858f662aed3576dc9f64a4feb7f653bfeca2882a091ed22e80d160f047d8cd2.jpg)
127
+ Figure 2: Data-efficient segmentation results in BraTS. With less labeled data, the supervised baseline (brown) fails to generalize, as opposed to our methods. Also, the proposed 3D methods outperform all 2D counterparts.
128
+
129
+ Table 1: BraTS segmentation results (Dice scores)
130
+
131
+ | Model | ET | WT | TC |
+ |---|---|---|---|
+ | 3D-From scratch | 76.38 | 87.82 | 83.11 |
+ | 3D Supervised | 78.88 | 90.11 | 84.92 |
+ | 2D-CPC | 76.60 | 86.27 | 82.41 |
+ | 2D-RPL | 77.53 | 87.91 | 82.56 |
+ | 2D-Jigsaw | 76.12 | 86.28 | 83.26 |
+ | 2D-Rotation | 76.60 | 88.78 | 82.41 |
+ | 2D-Exemplar | 75.22 | 84.82 | 81.87 |
+ | Popli et al. [66] | 74.39 | 89.41 | 82.48 |
+ | Baid et al. [67] | 74.80 | 87.80 | 82.66 |
+ | Chandra et al. [68] | 74.06 | 87.19 | 79.89 |
+ | Isensee et al. [65] | 80.36 | 90.80 | 84.32 |
+ | 3D-CPC | 80.83 | 89.88 | 85.11 |
+ | 3D-RPL | 81.28 | 90.71 | 86.12 |
+ | 3D-Jigsaw | 79.66 | 89.20 | 82.52 |
+ | 3D-Rotation | 80.21 | 89.63 | 84.75 |
+ | 3D-Exemplar | 79.46 | 90.80 | 83.87 |
132
+
133
+ when fine-tuning our models on $5\%$ , $10\%$ , $50\%$ , and $100\%$ of the full data size. The results obtained by our 3D methods also outperform the baselines in this task by a margin when using only a few training samples, e.g. in the $5\%$ and $10\%$ cases. Another significant benefit offered by pretraining with our methods is the speed of convergence on downstream tasks. As demonstrated in Fig. 5, when training on the full pancreas dataset, our models achieve much higher performance within only the first 20 epochs, compared to the "from scratch" baseline. We should note that we evaluate this task on a held-out labeled subset of the Pancreas dataset that was used for neither pretraining nor fine-tuning. We provide the full list of experimental results for this task in the Appendix.
134
+
135
+ # 4.3 Diabetic Retinopathy Results
136
+
137
+ As part of our work, we also provide implementations for the 2D versions of the developed self-supervised methods. We showcase these implementations on the Diabetic Retinopathy 2019 Kaggle challenge. This dataset contains roughly 5,590 2D Fundus images, each of which was rated by a clinician on a severity scale of 0 to 4. These levels define a classification task. In order to evaluate our tasks on this benchmark, we pretrain all the 2D versions of our methods using 2D Fundus images from UK Biobank [63]. The retinopathy data in UK Biobank contains $170K$ images. We then fine-tune the obtained models on Kaggle data, i.e., we perform transfer learning. We also compare the results obtained with this transfer learning protocol to those obtained with the data-efficient evaluation protocol in [23], i.e. pretraining on the same Kaggle dataset and fine-tuning on subsets of it. To assess the gains in data-efficiency, we fine-tune the obtained models on subsets of labelled Kaggle data, shown in Fig. 4. It is noteworthy that pretraining on UKB produces results that outperform those obtained when pretraining on the same Kaggle dataset. This confirms the benefits of transfer learning from a large corpus to a smaller one using our methods. Gains in speed of convergence are also shown in Fig. 6. In this 2D task, we achieve results consistent with the other downstream tasks, presented before. We should point out that we evaluate with 5-fold cross-validation on this 2D dataset. The metric used in this task, as in the Kaggle challenge, is the Quadratic Weighted Kappa, which measures the agreement between two ratings. Its values vary from random (0) to complete (1) agreement, and if there is less agreement than chance it may become negative.
138
+
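+ For reference, the Quadratic Weighted Kappa can be computed with scikit-learn (the toy ratings below are illustrative, not from our experiments):
+
+ ```python
+ from sklearn.metrics import cohen_kappa_score
+
+ # Severity grades on the 0-4 scale: clinician ratings vs. model predictions.
+ y_true = [0, 2, 4, 1, 3, 2]
+ y_pred = [0, 2, 3, 1, 4, 2]
+ kappa = cohen_kappa_score(y_true, y_pred, weights="quadratic")
+ print(f"QWK = {kappa:.3f}")  # 1 = complete agreement, <= 0 = chance or worse
+ ```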
139
+ # 5 Conclusion
140
+
141
+ In this work, we asked whether designing 3D self-supervised tasks could benefit the learned representations from unlabeled 3D images, and found that it indeed greatly improves their downstream performance, especially when fine-tuned on only small amounts of labeled 3D data. We demonstrate the obtained gains by our proposed 3D algorithms in data-efficiency, performance, and speed of convergence on three different downstream tasks. Our 3D tasks outperform their 2D counterparts, hence supporting our proposal of utilizing the 3D spatial context in the design of self-supervised tasks, when operating on 3D domains. What is more, our results, particularly in the low-data regime,
142
+
143
+ ![](images/a6b342e48f12ce2b4beca202cf80e83d1b0d784b072053e4498bbf03de2a09b0.jpg)
144
+ Figure 3: Data-efficient segmentation results in Pancreas. With less labeled data, the supervised baseline (brown) fails to generalize, as opposed to our methods. Also, the proposed 3D methods outperform all 2D counterparts.
145
+
146
+ ![](images/4c66238a7ac9bc3d0f485de96ad85ee1a3229b65caa8248ce53ca8ac09088bbe.jpg)
147
+ Figure 5: Speed of convergence in Pancreas segmentation. Our models converge faster than the baseline (brown).
148
+
149
+ ![](images/1dcb267cf2ee1860c48337c0b90bcb4c6b09449b6099b76dfb171c138c5a5708.jpg)
150
+ Figure 4: Data-efficient classification in Diabetic Retinopathy. With fewer labels, the supervised baseline (brown) fails to generalize, as opposed to pretrained models. This result is consistent with the other downstream tasks.
151
+
152
+ ![](images/9c561c8694de6659f1e23fd808c7cacbc8e94b95b315a5bbedbcc230e3554309.jpg)
153
+ Figure 6: Speed of convergence in Retinopathy classification. Our models also converge faster in this task.
154
+
155
+ demonstrate the possibility of reducing the manual annotation effort required in the medical imaging domain, where data and annotation scarcity is an obstacle. Furthermore, we observe performance gains when pretraining our methods on a large unlabeled corpus, and fine-tuning them on a different, smaller downstream-specific dataset. This result suggests alternatives to transfer learning from ImageNet features, which can be substantially different from the medical domain. Finally, we open-source our implementations for all 3D methods (and also their 2D versions), and we publish them to help other researchers apply our methods on other medical imaging tasks. This work is only a first step toward creating a set of methods that facilitate self-supervised learning research for 3D data, e.g. medical scans. We believe there is room for improvement along this line, such as designing new 3D proxy tasks, evaluating different architectural options, and including other data modalities (e.g. text) in conjunction with images/scans.
156
+
157
+ # Broader Impact
158
+
159
+ Due to technological advancements in 3D data sensing, and to the growing number of its applications, attention to machine learning algorithms that perform analysis tasks on such data has grown rapidly in the past few years. As mentioned before, 3D imaging has a multitude of applications [2], such as in robotics, CAD imaging, geology, and medical imaging. In this work, we developed multiple 3D deep learning algorithms, and evaluated them on multiple 3D medical imaging benchmarks. Our focus on medical imaging is motivated by the pressing demand for automatic (and instant) analysis systems that may aid the medical community.
160
+
161
+ Medical imaging plays an important role in patient healthcare, as it aids in disease prevention, early detection, diagnosis, and treatment. With the continuous digitization of medical images, the hope that physicians and radiologists can instantly analyze them with machine learning algorithms is slowly becoming a reality. Achieving this has become more critical recently, as the number of patients who contracted the novel coronavirus disease COVID-19 reached record highs. Radiography images provide a rich and quick diagnostic tool, because other types of tests, e.g. RT-PCR, an RNA/DNA-based test, have low sensitivity and may require hours or days of processing [71]. Therefore, as imaging allows such instant insights into human body organs, it receives growing attention from both the machine learning and medical communities.
162
+
163
+ Yet efforts to leverage advancements in machine learning, particularly supervised algorithms, are often hampered by the sheer expense of the expert annotation required [4]. Generating expert annotations of patient data at scale is non-trivial, expensive, and time-consuming, especially for 3D medical scans. Even current semi-automatic software tools fail to sufficiently address this challenge. Consequently, it is necessary to rely on annotation-efficient machine learning algorithms, such as self-supervised (unsupervised) approaches for representation learning from unlabelled data. Our work aims to provide the necessary tools for 3D image analysis in general, and to aid physicians and radiologists in their diagnostic tasks from 3D scans in particular. As a main consequence of this work, the developed methods can help reduce the effort and cost of annotation required by these practitioners. In the larger goal of leveraging machine learning for good, our work is a small step toward better patient healthcare.
164
+
165
+ # Acknowledgments and Disclosure of Funding
166
+
167
+ This research has been supported by funding from the German Federal Ministry of Education and Research (BMBF) in the project KI-LAB-ITSE (project number 01IS19066). This research has been conducted using the UK Biobank Resource.
168
+
169
+ # References
170
+
171
+ [1] David Griffiths and Jan Boehm. A review on deep learning techniques for 3d sensed data classification. CoRR, abs/1907.04444, 2019. URL http://arxiv.org/abs/1907.04444.
172
+ [2] Anastasia Ioannidou, Elisavet Chatzilari, Spiros Nikolopoulos, and Ioannis Kompatsiaris. Deep learning advances in computer vision with 3d data: A survey. ACM Computing Surveys, 50, 06 2017. doi: 10.1145/3042064.
173
+ [3] Hao Su, Leonidas Guibas, Michael Bronstein, Evangelos Kalogerakis, Jimei Yang, Charles Qi, and Qixing Huang. 3D Deep Learning, 2017 (accessed June 2, 2020). URL http://3ddl.stanford.edu/.
174
+ [4] Katharina Grünberg, Oscar Jimenez-del Toro, Andras Jakab, Georg Langs, Tomás Salas Fernandez, Marianne Winterstein, Marc-Andre Weber, and Markus Krenn. Annotating Medical Image Data, pages 45–67. Springer International Publishing, Cham, 2017.
175
+ [5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR09, Miami, FL, USA, 2009. IEEE.
176
+ [6] Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M. Summers. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3462-3471, 2017.
177
+
178
+ [7] Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Yi Ding, Aarti Bagul, Curtis Langlotz, Katie S. Shpanskaya, Matthew P. Lungren, and Andrew Y. Ng. Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. CoRR, abs/1711.05225, 2017. URL http://arxiv.org/abs/1711.05225.
179
+ [8] Jaakko Sahlsten, Joel Jaskari, Jyri Kivinen, Lauri Turunen, Esa Jaanio, Kustaa Hietala, and Kimmo Kaski. Deep learning fundus image analysis for diabetic retinopathy and macular edema grading. Scientific Reports, 9, 12 2019. doi: 10.1038/s41598-019-47181-w.
180
+ [9] Sheikh Muhammad Saiful Islam, Md Mahedi Hasan, and Sohaib Abdullah. Deep learning based early detection and grading of diabetic retinopathy using retinal fundus images. CoRR, abs/1812.10595, 2018. URL http://arxiv.org/abs/1812.10595.
181
+ [10] Antonio Torralba and Alexey A. Efros. Unbiased look at dataset bias. In CVPR 2011, pages 1521-1528, 2011.
182
+ [11] Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, and Samy Bengio. Transfusion: Understanding transfer learning for medical imaging. In Advances in Neural Information Processing Systems 32, pages 3347-3357. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/8596-transfusion-understanding-transfer-learning-for-medical-imaging.pdf.
183
+ [12] Ronald Eisenberg and Alexander Margulis. A Patient's Guide to Medical Imaging. New York: Oxford University Press, NY, USA, 2011.
184
+ [13] Longlong Jing and Yingli Tian. Self-supervised visual feature learning with deep neural networks: A survey. CoRR, abs/1902.06162, 2019. URL http://arxiv.org/abs/1902.06162.
185
+ [14] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In Yoshua Bengio and Yann LeCun, editors, 1st International Conference on Learning Representations, ICLR 2013, May 2-4, 2013, Workshop Track Proceedings, Scottsdale, Arizona, USA, 2013. OpenReview. URL http://arxiv.org/abs/1301.3781.
186
+ [15] Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV '15, page 1422-1430, USA, 2015. IEEE Computer Society. ISBN 9781467383912. doi: 10.1109/ICCV.2015.167. URL https://doi.org/10.1109/ICCV.2015.167.
187
+ [16] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, editors, Computer Vision – ECCV 2016, pages 69–84, Cham, 2016. Springer International Publishing. ISBN 978-3-319-46466-4.
188
+ [17] Richard Zhang, Phillip Isola, and Alexei A. Efros. Colorful image colorization. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, editors, Computer Vision – ECCV 2016, pages 649–666, Cham, 2016. Springer International Publishing. ISBN 978-3-319-46487-9.
189
+ [18] Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In The European Conference on Computer Vision (ECCV), Munich, Germany, September 2018. Springer.
190
+ [19] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. CoRR, abs/1803.07728, 2018. URL http://arxiv.org/abs/1803.07728.
191
+ [20] Jiawei Wang, Shuai Zhu, Jiao Xu, and Da Cao. The retrieval of the beautiful: Self-supervised salient object detection for beauty product retrieval. In Proceedings of the 27th ACM International Conference on Multimedia, MM '19, page 2548-2552, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450368896. doi: 10.1145/3343031.3356059. URL https://doi.org/10.1145/3343031.3356059.
192
+ [21] Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei Efros. Context encoders: Feature learning by inpainting. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
193
+ [22] Aäron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. CoRR, abs/1807.03748, 2018. URL http://arxiv.org/abs/1807.03748.
194
+ [23] Olivier J. Hénaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aäron van den Oord. Data-efficient image recognition with contrastive predictive coding. CoRR, abs/1905.09272, 2019. URL http://arxiv.org/abs/1905.09272.
195
+
196
+ [24] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations, 2020.
197
+ [25] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
198
+ [26] Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Yee Whye Teh and Mike Titterington, editors, Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 297–304, Chia Laguna Resort, Sardinia, Italy, 13–15 May 2010. PMLR. URL http://proceedings.mlr.press/v9/gutmann10a.html.
199
+ [27] Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 2794-2802, 2015.
200
+ [28] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Anticipating the future by watching unlabeled video. CoRR, abs/1504.08023, 2015. URL http://arxiv.org/abs/1504.08023.
201
+ [29] Jacob Walker, Abhinav Gupta, and Martial Hebert. Dense optical flow prediction from a static image. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 2443-2451, 2015.
202
+ [30] Senthil Purushwalkam and Abhinav Gupta. Pose from action: Unsupervised learning of pose features based on motion. CoRR, abs/1609.05420, 2016. URL http://arxiv.org/abs/1609.05420.
203
+ [31] Carl Vondrick, Abhinav Shrivastava, Alireza Fathi, Sergio Guadarrama, and Kevin Murphy. Tracking emerges by colorizing videos. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
204
+ [32] S. Ji, W. Xu, M. Yang, and K. Yu. 3d convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):221-231, 2013.
205
+ [33] Dahun Kim, Donghyeon Cho, and In So Kweon. Self-supervised video representation learning with space-time cubic puzzles. Proceedings of the AAAI Conference on Artificial Intelligence, 33:8545–8552, 07 2019. doi: 10.1609/aaai.v33i01.33018545.
206
+ [34] Longlong Jing and Yingli Tian. Self-supervised spatiotemporal feature learning by video geometric transformations. CoRR, abs/1811.11387, 2018. URL http://arxiv.org/abs/1811.11387.
207
+ [35] Xingtong Liu, Ayushi Sinha, Mathias Unberath, Masaru Ishii, Gregory D. Hager, Russell H. Taylor, and Austin Reiter. Self-supervised learning for dense depth estimation in monocular endoscopy. CoRR, abs/1806.09521, 2018. URL http://arxiv.org/abs/1806.09521.
208
+ [36] Menglong Ye, Edward Johns, Ankur Handa, Lin Zhang, Philip Pratt, and Guang Yang. Self-supervised siamese learning on stereo image pairs for depth estimation in robotic surgery. In The Hamlyn Symposium on Medical Robotics, pages 27-28, 06 2017. doi: 10.31256/HSMR2017.14.
209
+ [37] Hongming Li and Yong Fan. Non-rigid image registration using self-supervised fully convolutional networks without training data. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pages 1075-1078, Washington, DC, USA, April 2018. IEEE.
210
+ [38] Pengyue Zhang, Fusheng Wang, and Yefeng Zheng. Self supervised deep representation learning for fine-grained body part recognition. In 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pages 578-582, Melbourne, Australia, April 2017. IEEE.
211
+ [39] Amir Jamaludin, Timor Kadir, and Andrew Zisserman. Self-supervised learning for spinal mris. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 294-302, Cham, 09 2017. Springer. ISBN 978-3-319-67557-2. doi: 10.1007/978-3-319-67558-9_34.
212
+ [40] Wenjia Bai, Chen Chen, Giacomo Tarroni, Jinming Duan, Florian Guitton, Steffen E. Petersen, Yike Guo, Paul M. Matthews, and Daniel Rueckert. Self-supervised learning for cardiac mr image segmentation by anatomical position prediction. In Dinggang Shen, Tianming Liu, Terry M. Peters, Lawrence H. Staib, Caroline Essert, Sean Zhou, Pew-Thian Yap, and Ali Khan, editors, Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, pages 541–549, Cham, 2019. Springer International Publishing. ISBN 978-3-030-32245-8.
213
+
214
+ [41] Ke Yan, Xiaosong Wang, Le Lu, Ling Zhang, Adam P. Harrison, Mohammadhadi Bagheri, and Ronald M. Summers. Deep Lesion Graph in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-Scale Lesion Database, pages 413-435. Springer International Publishing, Cham, 2019. ISBN 978-3-030-13969-8. doi: 10.1007/978-3-030-13969-8_20. URL https://doi.org/10.1007/978-3-030-13969-8_20.
215
+ [42] Tobias Roß, David Zimmerer, Anant Vemuri, Fabian Isensee, Sebastian Bodenstedt, Fabian Both, Philip Kessler, Martin Wagner, Beat Müller, Hannes Kenngott, Stefanie Speidel, Klaus Maier-Hein, and Lena Maier-Hein. Exploiting the potential of unlabeled endoscopic video data with self-supervised learning. International Journal of Computer Assisted Radiology and Surgery, 13, 11 2017. doi: 10.1007/s11548-018-1772-0.
216
+ [43] Hannah Spitzer, Kai Kiwitz, Katrin Amunts, Stefan Harmeling, and Timo Dickscheid. Improving cytoarchitectonic segmentation of human brain areas with self-supervised siamese networks. In Alejandro F. Frangi, Julia A. Schnabel, Christos Davatzikos, Carlos Alberola-López, and Gabor Fichtinger, editors, Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, pages 663–671, Cham, 2018. Springer International Publishing. ISBN 978-3-030-00931-1.
217
+ [44] N. Tajbakhsh, Y. Hu, J. Cao, X. Yan, Y. Xiao, Y. Lu, J. Liang, D. Terzopoulos, and X. Ding. Surrogate supervision for medical image analysis: Effective deep learning from limited quantities of labeled data. In 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), pages 1251–1255, 2019.
218
+ [45] Liang Chen, Paul Bentley, Kensaku Mori, Kazunari Misawa, Michitaka Fujiwara, and Daniel Rueckert. Self-supervised learning for medical image analysis using image context restoration. Medical Image Analysis, 58:101539, 2019. ISSN 1361-8415. doi: https://doi.org/10.1016/j.media.2019.101539. URL http://www.sciencedirect.com/science/article/pii/S1361841518304699.
219
+ [46] Jianbo Jiao, Richard Droste, Lior Drukker, Aris T. Papageorghiou, and J. Alison Noble. Self-supervised representation learning for ultrasound video. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), pages 1847-1850, 2020.
220
+ [47] Aiham Taleb, Christoph Lippert, Tassilo Klein, and Moin Nabi. Multimodal self-supervised learning for medical image analysis, 2019.
221
+ [48] Maximilian Blendowski, Hannes Nickisch, and Mattias P. Heinrich. How to learn from unlabeled volume data: Self-supervised 3d context feature learning. In Dinggang Shen, Tianming Liu, Terry M. Peters, Lawrence H. Staib, Caroline Essert, Sean Zhou, Pew-Thian Yap, and Ali Khan, editors, Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, pages 649–657, Cham, 2019. Springer International Publishing. ISBN 978-3-030-32226-7.
222
+ [49] Krishna Chaitanya, Ertunc Erdil, Neerav Karani, and Ender Konukoglu. Contrastive learning of global and local features for medical image segmentation with limited annotations, 2020.
223
+ [50] Zongwei Zhou, Vatsal Sodha, Md Mahfuzur Rahman Siddiquee, Ruibin Feng, Nima Tajbakhsh, Michael B. Gotway, and Jianming Liang. Models genesis: Generic autodidactic models for 3d medical image analysis. In Medical Image Computing and Computer Assisted Intervention - MICCAI 2019, pages 384-393, Cham, 2019. Springer International Publishing. ISBN 978-3-030-32251-9.
224
+ [51] Xinrui Zhuang, Yuexiang Li, Yifan Hu, Kai Ma, Yujiu Yang, and Yefeng Zheng. Self-supervised feature learning for 3d medical images by playing a rubik's cube. In Dinggang Shen, Tianming Liu, Terry M. Peters, Lawrence H. Staib, Caroline Essert, Sean Zhou, Pew-Thian Yap, and Ali Khan, editors, Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, pages 420–428, Cham, 2019. Springer International Publishing. ISBN 978-3-030-32251-9.
225
+ [52] Jiuwen Zhu, Yuexiang Li, Yifan Hu, Kai Ma, S. Kevin Zhou, and Yefeng Zheng. Rubik's cube+: A self-supervised feature learning framework for 3d medical image analysis. Medical Image Analysis, 64:101746, 2020. ISSN 1361-8415. doi: https://doi.org/10.1016/j.media.2020.101746. URL http://www.sciencedirect.com/science/article/pii/S1361841520301109.
226
+ [53] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar, October 2014. Association for Computational Linguistics. doi: 10.3115/v1/D14-1179. URL https://www.aclweb.org/anthology/D14-1179.
227
+
228
+ [54] Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, koray kavukcuoglu, Oriol Vinyals, and Alex Graves. Conditional image generation with pixelcnn decoders. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4790-4798. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6527-conditional-image-generation-with-pixelcnn-decoders.pdf.
229
+ [55] Marijn F. Stollenga, Wonmin Byeon, Marcus Liwicki, and Jürgen Schmidhuber. Parallel multi-dimensional LSTM, with application to fast biomedical volumetric image segmentation. CoRR, abs/1506.07452, 2015. URL http://arxiv.org/abs/1506.07452.
230
+ [56] Aäron Van Den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, page 1747-1756. JMLR.org, 2016.
231
+ [57] Alexey Dosovitskiy, Jost T. Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems 27 (NIPS), 2014. URL http://lmb.informatik.uni-freiburg.de/Publications/2014/DB14b.
232
+ [58] Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 2794-2802, 2015.
233
+ [59] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 815-823, 2015.
234
+ [60] Priya Goyal, Dhruv Mahajan, Harikrishna Mulam, and Ishan Misra. Scaling and benchmarking self-supervised visual representation learning. In The IEEE International Conference on Computer Vision (ICCV), pages 6390-6399, October 2019. doi: 10.1109/ICCV.2019.00649.
235
+ [61] Bjoern H. Menze, Andras Jakab, Stefan Bauer, Jayashree Kalpathy-Cramer, Keyvan Farahani, Justin Kirby, Yuliya Burren, and et al. The multimodal brain tumor image segmentation benchmark (brats). IEEE Transactions on Medical Imaging, 34(10):1993-2024, 2015.
236
+ [62] Spyridon Bakas, Hamed Akbari, Aristeidis Sotiras, Michel Bilello, Martin Rozycki, Justin S. Kirby, John B. Freymann, Keyvan Farahani, and Christos Davatzikos. Advancing the cancer genome atlas glioma mri collections with expert segmentation labels and radiomic features. Scientific Data, 4:170117 EP -, 09 2017.
237
+ [63] Cathie Sudlow, John Gallacher, Naomi Allen, Valerie Beral, Paul Burton, John Danesh, Paul Downey, Paul Elliott, Jane Green, Martin Landray, Bette Liu, Paul Matthews, Giok Ong, Jill Pell, Alan Silman, Alan Young, Tim Sprosen, Tim Peakman, and Rory Collins. Uk biobank: An open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLOS Medicine, 12(3):1-10, 03 2015. doi: 10.1371/journal.pmed.1001779. URL https://doi.org/10.1371/journal.pmed.1001779.
238
+ [64] Mark W. Woolrich, Saad Jbabdi, Brian Patenaude, Michael Chappell, Salima Makni, Timothy Behrens, Christian Beckmann, Mark Jenkinson, and Stephen M. Smith. Bayesian analysis of neuroimaging data in fsl. NeuroImage, 45(1, Supplement 1):S173 - S186, 2009. ISSN 1053-8119. doi: https://doi.org/10.1016/j.neuroimage.2008.10.055. URL http://www.sciencedirect.com/science/article/pii/S1053811908012044. Mathematics in Brain Imaging.
239
+ [65] Fabian Isensee, Philipp Kickingereder, Wolfgang Wick, Martin Bendszus, and Klaus H Maier-Hein. No new-net. In International MICCAI Brainlesion Workshop, pages 234–244, Granada, Spain, 2018. Springer.
240
+ [66] Anmol Popli, Manu Agarwal, and G.N. Pillai. Automatic brain tumor segmentation using u-net based 3d fully convolutional network. In Pre-Conference Proceedings of the 7th MICCAI BraTS Challenge, pages 374-382. Springer, 2018.
241
+ [67] Ujjwal Baid, Abhishek Mahajan, Sanjay Talbar, Swapnil Rane, Siddhesh Thakur, Aliasgar Moiyadi, Meenakshi Thakur, and Sudeep Gupta. Gbm segmentation with 3d u-net and survival prediction with radiomics. In International MICCAI Brainlesion Workshop, pages 28-35. Springer, 2018.
242
+ [68] Siddhartha Chandra, Maria Vakalopoulou, Lucas Fidon, Enzo Battistella, Theo Estienne, Roger Sun, Charlotte Robert, Eric Deutch, and Nikos Paragios. Context aware 3-d residual networks for brain tumor segmentation. In International MICCAI Brainlesion Workshop, pages 74-82. Springer, 2018.
243
+
244
+ [69] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Nassir Navab, Joachim Hornegger, William M. Wells, and Alejandro F. Frangi, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pages 234–241, Cham, 2015. Springer International Publishing. ISBN 978-3-319-24574-4.
245
+ [70] Amber L. Simpson, Michela Antonelli, Spyridon Bakas, Michel Bilello, Keyvan Farahani, Bram van Ginneken, Annette Kopp-Schneider, Bennett A. Landman, Geert J. S. Litjens, Bjoern H. Menze, Olaf Ronneberger, Ronald M. Summers, Patrick Bilic, Patrick Ferdinand Christ, Richard K. G. Do, Marc Gollub, Jennifer Golia-Pernicka, Stephan Heckers, William R. Jarnagin, Maureen McHugo, Sandy Napel, Eugene Vorontsov, Lena Maier-Hein, and M. Jorge Cardoso. A large annotated medical image dataset for the development and evaluation of segmentation algorithms. CoRR, abs/1902.09063, 2019. URL http://arxiv.org/abs/1902.09063.
246
+ [71] Sayan Manna, Jill Wruble, Samuel Z Maron, Danielle Toussie, Nicholas Voutsinas, Mark Finkelstein, Mario A Cedillo, Jamie Diamond, Corey Eber, Adam Jacobi, Michael Chung, and Adam Bernheim. Covid-19: A multimodality review of radiologic techniques, clinical utility, and imaging features. *Radiology: Cardiothoracic Imaging*, 2(3):e200210, 2020. doi: 10.1148/ryct.2020200210. URL https://doi.org/10.1148/ryct.2020200210.
3dselfsupervisedmethodsformedicalimaging/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99181e363769eff4d17338ba603be9d9cad357d48f1ea6fb7192eb07611150e0
+ size 317523
3dselfsupervisedmethodsformedicalimaging/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:80a1cd8c798d2ad1a0e4a3281d6795e11376aa0b530e07260ecbe16fb383a3b5
+ size 392927
3dshapereconstructionfromvisionandtouch/1f94e634-0571-43de-9d4f-fd6aede79b46_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ec1459000a192de02ebbdb33511424ad8df412c8647613d6676d3422975b2df9
+ size 76520
3dshapereconstructionfromvisionandtouch/1f94e634-0571-43de-9d4f-fd6aede79b46_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7a9a8dd905c70d7b68e7d238cf363085b9adc0e2b683fa4e6a048f40f0f5c8ce
+ size 98357
3dshapereconstructionfromvisionandtouch/1f94e634-0571-43de-9d4f-fd6aede79b46_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a22d165bd827c46b55427b9c3788ed60c95226ba47a28a395cfdb714c380261
+ size 2327150
3dshapereconstructionfromvisionandtouch/full.md ADDED
@@ -0,0 +1,265 @@
+ # 3D Shape Reconstruction from Vision and Touch
+
+ Edward J. Smith $^{1,2*}$ Roberto Calandra $^{1}$ Adriana Romero $^{1,2}$ Georgia Gkioxari $^{1}$ David Meger $^{2}$ Jitendra Malik $^{1,3}$ Michal Drozdzal $^{1}$
+
+ $^{1}$ Facebook AI Research $^{2}$ McGill University $^{3}$ University of California, Berkeley
+
+ # Abstract
+
+ When a toddler is presented with a new toy, their instinctual behaviour is to pick it up and inspect it with their hand and eyes in tandem, clearly searching over its surface to properly understand what they are playing with. At any instant here, touch provides high-fidelity localized information while vision provides complementary global context. However, in 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored. In this paper, we study this problem and present an effective chart-based approach to multi-modal shape understanding which encourages a similar fusion of vision and touch information. To do so, we introduce a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects. Our results show that (1) leveraging both vision and touch signals consistently improves single-modality baselines; (2) our approach outperforms alternative modality fusion methods and strongly benefits from the proposed chart-based structure; (3) the reconstruction quality increases with the number of grasps provided; and (4) the touch information not only enhances the reconstruction at the touch site but also extrapolates to its local neighborhood.
+
+ # 1 Introduction
+
+ From an early age children clearly, and often loudly, demonstrate that they need to both look at and touch any new object that has piqued their interest. The instinctual behavior of inspecting with both their eyes and hands in tandem demonstrates the importance of fusing vision and touch information for 3D object understanding. Through machine learning techniques, 3D models of both objects and environments have been built by independently leveraging a variety of perception-based sensors, such as those for vision (e.g. a single RGB image) [58, 19] and touch [72, 66]. However, vision and touch possess a clear complementary nature. On one hand, vision provides a global context for object understanding, but is hindered by occlusions introduced by the object itself and from other objects in the scene. Moreover, vision is also affected by bas-relief [39] and scale/distance ambiguities, as well as slant/tilt angles [3]. On the other hand, touch provides localized 3D shape information, including the point of contact in space as well as high spatial resolution of the shape, but fails quickly when extrapolating without global context or strong priors. Hence, combining both modalities should lead to richer information and better models for 3D understanding. An overview of 3D shape reconstruction from vision and touch is displayed in Figure 1.
+
+ Visual and haptic modalities have been combined in the literature [2] to learn multi-modal representations of the 3D world, and improve upon subsequent 3D understanding tasks such as object manipulation [41] or any-modal conditional generation [42]. Tactile information has also been used to improve 3D reconstructions in real environments. In particular, [66] leverages vision and touch sequentially, by first using vision to learn 3D object shape priors on simulated data, and subsequently using touch to refine the vision reconstructions when performing sim2real transfer. However, to the best of our knowledge, the complementary fusion of vision (in particular, RGB images) and touch in 3D shape reconstruction remains largely unexplored.
+
+ ![](images/6480592f04e3a66d6912913d204864f87bb53d9ec3226cfe023c22af5226bb91.jpg)
+ Figure 1: 3D shape understanding from vision and touch includes: (1) shape sensing with a camera and touch sensor, as well as (2) a reconstruction algorithm that fuses vision and touch readings. In this paper, we introduce a dataset that captures object sensing and propose a chart-based fusion model for 3D shape prediction from multi-modal inputs. For touch, we realistically simulate an existing vision-based tactile sensor [40].
+
+ In this paper, we focus on this unexplored space, and present an approach that effectively fuses the global and local information provided by visual and haptic modalities to perform 3D shape reconstruction. Inspired by the papier-mâché technique of AtlasNet and similar works [21, 71, 14], and leveraging recent advances in graph convolutional networks (GCNs) [38], we aim to represent a 3D object with a collection of disjoint mesh surface elements, called charts [33], where some charts are reserved for tactile signals and others are used to represent visual information. More precisely, given an RGB image of an object and high-spatial-resolution tactile (mimicking a DIGIT tactile sensor [40]) and pose information of a grasp, the approach predicts a high-fidelity local chart at each touch site and then uses the corresponding vision information to predict global charts which close the surface around them, in a fill-in-the-blank type procedure. As learning from real-world robot interactions is resource and time intensive, we have designed a simulator to produce a multi-modal dataset of interactions between a robotic hand and four classes of objects, which can be used to benchmark approaches to 3D shape reconstruction from vision and touch, and help advance the field. Our dataset contains ground truth 3D objects as well as recordings from vision and tactile sensors, such as RGB images and touch readings. Results on the proposed dataset show that by combining visual and tactile cues, we are able to outperform single-modality touch and vision baselines. We demonstrate the intuitive property that learning from touch exclusively translates into decreased performance, as the 3D shape reconstruction suffers from poor global context, while learning from vision exclusively suffers from occlusions and leads to lower local reconstruction accuracy. However, when combining both modalities, we observe a systematic improvement, suggesting that the proposed approach effectively benefits from vision and touch signals, and surpasses alternative fusion strategies. Moreover, when increasing the number of grasps provided, we are able to further boost the 3D shape reconstruction quality. Finally, due to our model design, the touch readings not only enhance the reconstruction at the touch site but also reduce the error in the neighborhood of the touch sensor position.
+
+ Our main contributions can be summarized as: (1) we introduce a chart-based approach to 3D object reconstruction, leveraging GCNs to combine visual and haptic signals; (2) we build a dataset of simulated haptic object interactions to benchmark 3D shape reconstruction algorithms in this setting; and (3) through an extensive evaluation, we highlight the benefits of the proposed approach, which effectively exploits the complementarity of both modalities. Code for our system is publicly available on a GitHub repository, to ensure reproducible experimental comparison.[2]
+
+ # 2 Related Work
+
+ 3D reconstruction from vision. There is a vast literature addressing 3D shape reconstruction from visual signals. Approaches often differ in their input visual signal – e.g. a single RGB image [59, 58, 19, 46], multi-view RGB images [12, 27, 35, 37], and depth images [54, 73] – and their predicted 3D representation – e.g. orientation/3D pose [26, 17], signed distance functions [45], voxels, point clouds, and meshes [32]. Point cloud-based approaches [16, 52, 29, 47], together with voxel-based approaches [9, 59, 63, 68, 69, 70] and their computationally efficient counterparts [53, 62, 23], have long dominated the deep learning-based 3D reconstruction literature. However, recent advances in graph neural networks [7, 13, 38, 64, 22] have enabled the effective processing and increasing use of surface meshes [36, 65, 34, 30, 25, 58, 8] and hybrid representations [20, 19]. While more complex in their encoding, mesh-based representations benefit greatly from their arbitrary resolution over other more naive representations. Our chosen representation most closely relates to that of [20], which combines deformed sheets of points to form 3D shapes. However, unlike [20], our proposed approach exploits the neighborhood connectivity of meshes. Finally, 3D reconstruction has also been posed as a shape completion problem [59, 73], where the input is a partial point cloud obtained from depth information and the prediction is the complete version of it.
+
+ ![](images/d5433c8b71ca7d89f585b7855f4fa563d38ce89f37a1d9250cfca3ed2b9278fe.jpg)
+ Figure 2: Our approach to 3D shape reconstruction combines a single RGB image with 4 touch readings. We start by predicting touch charts from the touch readings, and projecting the visual signal onto all charts. Then, we feed the charts into an iterative deformation process, where we enforce touch consistency. As a result, we obtain a global prediction of deformed charts.
+
+ 3D reconstruction from touch. Haptic signals have been exploited to address the shape completion problem [4, 50, 60, 48, 43]. Shape reconstruction has also been tackled from an active acquisition perspective, where successive touches are used to improve the reconstruction outcome and/or reduce the reconstruction uncertainty [5, 44, 72, 31, 15]. Most of these works use point-wise tactile sensors; in contrast, we use a high-dimensional, high-resolution sensor [40] which provides far more detailed local geometry of the object being touched. In addition, these works make use only of proprioceptive and touch information, while we also tackle the problem of integrating global information from visual inputs in a principled manner. For an extensive and more general review on robotic tactile perception, we refer the reader to [43].
+
+ 3D reconstruction from vision and touch. Many approaches exploiting vision and touch for 3D shape reconstruction rely on depth information [6, 28, 18]. In these works the depth information is represented as a sparse point cloud, augmented with touch points, which is fed to a Gaussian Process to predict implicit shape descriptors (e.g., level sets). Another line of work [66] considers RGB visual signals and uses a deep learning-based approach to produce voxelized 3D shape priors, which are subsequently refined with touch information when transferring the set-up to a real environment. Note that, following 3D shape reconstruction from touch, the previous works are concerned with the active acquisition of grasps. Moreover, [67] uses touch and partial depth maps separately to predict independent voxel models, which are then combined to produce a final prediction. In contrast to these works, we use a 4-fingered robot hand equipped with high-resolution tactile sensors – integrating such high-dimensional inputs is significantly more challenging but also potentially more useful for downstream robot manipulation tasks.
+
+ # 3 Global and Local Reconstruction Methods
+
+ We consider the problem of 3D shape reconstruction from visual and haptic signals and leverage a deep learning approach which deforms disjoint mesh surface elements through a GCN. We assume that visual information is obtained from a single RGB image and haptic information is obtained from vision-based touch sensors with high spatial resolution, such as DIGIT [40]. More precisely, let $V$ denote the RGB image used as the vision signal. Let $T = [R_i, P_i, M_i]_{i=1}^{n_t}$ denote the touch information, where $R_i$ is one touch sensor reading, $P_i$ its corresponding position and rotation in space, $M_i$ a binary mask indicating whether the touch is successful (i.e. the sensor is in contact with the object), and $n_t$ is the number of touch sensors. Let $O$ be the target object shape, represented as a surface mesh. The objective is to learn a function $f_{\theta}$ parameterized by $\theta$ that predicts an object shape reconstruction $\hat{O} = f_{\theta}(V,T)$ such that it best captures the surface defined by $O$. In our approach, we represent $\hat{O}$ as a set of independent surface elements, $\{C_i\}_{i=1}^{n_c}$, which we call charts. A chart, $C_i$, is implemented as a planar 3D polygon mesh, composed of connected triangular faces, each defined by 3 vertex positions. Figure 3 depicts the structure of a chart and outlines how a set of charts can be combined to form a closed 3D surface: the left image shows how a chart is parameterized by a simple planar mesh sheet, the middle image a collection of these charts, and the right image how this collection is combined to form an atlas whose surface emulates that of a pyramid. The decomposition of the surface into charts allows us to have vision-dedicated and touch-dedicated charts, which we fuse by deforming the vision charts around the touch charts.
+
+ An overview of our approach is highlighted in Figure 2. Touch signals are used to predict touch charts using a pre-trained fully convolutional network, while vision signals are used to define image features over the set of touch and vision charts using perceptual feature pooling [65]. This set of vision and touch charts is then iteratively deformed to obtain the 3D shape reconstruction.
+
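+ As a concrete illustration of this projection step, the following is a minimal sketch (not the authors' released code) of how perceptual feature pooling could be realized: each chart vertex is projected into the image plane under an assumed centered pinhole camera and bilinearly samples a CNN feature map. The function name, the focal length `f`, and the camera model are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def pool_vertex_features(verts, feat_map, f=248.0):
+     # verts: (V, 3) chart vertices in camera coordinates (z > 0);
+     # feat_map: (1, C, H, W) feature map from a 2D CNN over the RGB image.
+     x = f * verts[:, 0] / verts[:, 2]   # pinhole projection, in pixels
+     y = f * verts[:, 1] / verts[:, 2]   # measured from the image center
+     H, W = feat_map.shape[-2:]
+     grid = torch.stack([2 * x / W, 2 * y / H], dim=-1)  # map to [-1, 1]
+     grid = grid.view(1, -1, 1, 2)
+     feats = F.grid_sample(feat_map, grid, align_corners=False)  # (1, C, V, 1)
+     return feats[0, :, :, 0].t()        # (V, C) per-vertex image features
+ ```
+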
+ # 3.1 Merging vision and touch through chart deformation and tactile consistency
+
+ We adapt the mesh deformation setup outlined in [65, 58] and employ a GCN to deform our set of vision and touch charts. The GCN learns a function $f_{\theta_1}^{chart}$ parameterized by $\theta_1 \subset \theta$ that predicts the residual position of the vertices within each chart $C_i$ through successive layers. Given a vertex $u$, each layer $l$ of the GCN updates the vertex's features $H_u^{l-1}$ as:
+
+ $$
+ H_u^l = \sigma \left( W^l \left( \sum_{v \in \mathcal{N}_u \cup \{u\}} \frac{H_v^{l-1}}{\sqrt{(|\mathcal{N}_u| + 1)(|\mathcal{N}_v| + 1)}} \right) + b^l \right), \tag{1}
+ $$
+
+ where $W^l$ and $b^l$ are the learnable weights and biases of the $l$-th layer, $\sigma$ is a non-linearity, and $\mathcal{N}_u$ are the neighbors of the vertex $u$. We initialize each vertex's features $H_u^0$ by concatenating vision features obtained by applying perceptual pooling to the input image, with the $(x,y,z)$ position of the vertex in space, and a binary feature indicating whether the vertex is within a successful touch chart. The function $f_{\theta_1}^{chart}$ is trained to minimize the Chamfer distance [61] between two sets of points $S$ and $\hat{S}$ sampled from $O$ and $\{C_i\}_{i=1}^{n_c}$, respectively, over a dataset $\mathcal{D}$:
+
+ $$
+ \sum_{i \in \mathcal{D}} \left( \sum_{p \in S^{(i)}} \min_{\hat{p} \in \hat{S}^{(i)}} \| p - \hat{p} \|_2^2 + \sum_{\hat{p} \in \hat{S}^{(i)}} \min_{p \in S^{(i)}} \| p - \hat{p} \|_2^2 \right). \tag{2}
+ $$
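+
+ To make Eqs. (1) and (2) concrete, two minimal PyTorch sketches follow; they are hedged illustrations under simplifying assumptions, not the paper's released code. The first implements the normalized aggregation of Eq. (1) for a dense 0/1 adjacency matrix in which self-loops have already been added, so each row sum equals $|\mathcal{N}_u| + 1$; the class name is illustrative.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ChartGCNLayer(nn.Module):
+     def __init__(self, in_dim, out_dim):
+         super().__init__()
+         self.linear = nn.Linear(in_dim, out_dim)  # holds W^l and b^l
+
+     def forward(self, H, adj):
+         # H: (V, in_dim) vertex features; adj: (V, V) float 0/1 adjacency
+         # with self-loops, so adj.sum(1)[u] equals |N_u| + 1.
+         norm = adj.sum(dim=1).rsqrt()             # 1 / sqrt(|N_u| + 1)
+         agg = (norm[:, None] * adj * norm[None, :]) @ H
+         return torch.relu(self.linear(agg))       # sigma(W^l (...) + b^l)
+ ```
+
+ The second sketch evaluates the symmetric Chamfer objective of Eq. (2) for a single pair of sampled point sets; in practice this would be batched and summed over the dataset $\mathcal{D}$.
+
+ ```python
+ import torch
+
+ def chamfer_distance(S, S_hat):
+     # S: (N, 3) points sampled from the target surface O;
+     # S_hat: (M, 3) points sampled from the predicted charts.
+     d = torch.cdist(S, S_hat).pow(2)   # (N, M) squared pairwise distances
+     return d.min(dim=1).values.sum() + d.min(dim=0).values.sum()
+ ```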
+
+ GCNs enforce the exchange of information between vertices which belong to the same neighborhood, at every layer, to allow information to propagate throughout the graph (see Figure 4a). Vertices in independent charts, however, will never lie in the same neighborhood, and so no exchange of information between charts can occur. To allow for the exchange of information between vision charts, we initially arrange the vision charts $\{C_i^v\}_{i = 1}^{n_{cv}}$ to form a closed sphere (with no chart overlap). Then, we update each vertex's neighborhood such that vertices on the boundaries of different charts are in each other's neighborhood if they initially touch (see Figure 4b). With this setup, the charts are able to effectively communicate throughout the deformation process, and move freely during the optimization to optimally emulate the target surface. This is advantageous over the standard mesh deformation scheme, which deforms an initial closed mesh, as the prediction is no longer constrained to any fixed surface genus. Moreover, to define the communication between vision and touch charts, and enable the touch charts to influence the position of the vision charts, a reference vertex from the center of each touch chart is elected to lie within the neighborhood of all boundary vertices of vision charts and vice versa (see Figure 4c). With this setup, every vision chart can communicate with other nearby vision charts, as well as with the touch charts. This communication scheme allows local touch and vision information to propagate and fuse over all charts.
+
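+ The following is a minimal sketch of this neighborhood augmentation, assuming a boolean adjacency matrix over all chart vertices; the index arguments and the coincidence threshold `eps` are illustrative assumptions rather than values from the paper.
+
+ ```python
+ import torch
+
+ def augment_adjacency(adj, verts, boundary_idx, touch_ref_idx, eps=1e-5):
+     # adj: (V, V) boolean within-chart adjacency; verts: (V, 3) initial
+     # positions; boundary_idx: vision-chart boundary vertices;
+     # touch_ref_idx: one central reference vertex per touch chart.
+     adj = adj.clone()
+     b = boundary_idx
+     # Boundary vertices of different charts that start out coincident on
+     # the initial sphere are placed in each other's neighborhoods.
+     close = torch.cdist(verts[b], verts[b]) < eps
+     adj[b[:, None], b[None, :]] |= close
+     # Touch-chart reference vertices and vision-chart boundary vertices
+     # are likewise linked in both directions.
+     for r in touch_ref_idx:
+         adj[r, b] = True
+         adj[b, r] = True
+     return adj
+ ```
+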
+ The chart deformation occurs three times to refine the prediction [65, 58], with the predicted charts being fed back to the input of the GCN before producing a final prediction. In total, 95 vision charts are employed, each possessing 19 vertices and 24 faces. Touch charts each possess 81 vertices and 128 faces. The GCN updates the positions of the charts; however, the initial position of the touch charts is enforced after every deformation step and in the final mesh, as their shape is known to be practically perfect. In this manner, the touch charts are fixed, and the vision charts learn to fill in the remaining surface around them through communication in the GCN. Touch charts corresponding to unsuccessful touches are initialized with simply the position of their fingertips at all vertices. This informs the model about where the fingers are in space, and so also where the object cannot be. Note that the position of unsuccessful touches is not enforced after their deformation.
+
+ ![](images/404b0163d597f25161e86173ac6cbd8f7a1719c7978aefcd8c6ad097ff7f6e4f.jpg)
+ Figure 3: Structure of a chart along with how collections of charts can be deformed to produce a 3D surface, with vertices highlighted in red.
+
+ ![](images/dd77dc47f3ec13696620c92c97209e886d23a2e69b0c2d85ab2e170c4ca55b76.jpg)
+ Figure 4: Communication within and between charts, with vertices highlighted in red, and communication between them highlighted in blue.
+
+ ![](images/b062ac57fb7b853654af89f1801487ec16e64053935b25d16d84f48d863eaff0.jpg)
+ Figure 5: The pipeline for touch chart prediction from simulated touch readings. We start by predicting orthographic depth from the touch reading image; we then combine the orthographic depth with a point cloud sampled from the sensor plane, using the surface normal, to obtain a predicted point cloud of the local surface. The predicted point cloud is converted to a touch chart by running an iterative optimization.
+
+ # 3.2 Prediction of local touch charts
+
+ In this subsection, we describe how to obtain touch charts from touch signals produced using a gel-based sensor with high spatial resolution, such as the DIGIT [40]. To do this, we make note of what gel-based touch sensors truly observe: the impression of a surface through the gel. When untouched, the gel lies perpendicular to the camera's perspective and at a fixed distance away. When touched by an object, that object's local surface is interpretable by the depth of the impression it makes across the plane of the gel. If we then want to recover this surface from the sensor, we simply need to interpret the touch signal in terms of the depth of the impression across this plane.
+
+ Figure 5 depicts the pipeline for local structure prediction. Using the finger position information $P$, we start by defining a grid of points $G_{init} \in \mathbb{R}^{100 \times 100 \times 3}$ of the same size and resolution as the sensor, lying on a perpendicular plane above it, which corresponds physically to the untouched gel of the sensor. Then, we apply a function $f_{\theta_2}^{touch}$, parameterized by $\theta_2 \subset \theta$ and represented as a fully convolutional network (a U-Net-like model [55]), that takes as input the touch reading signal $R \in \mathbb{R}^{100 \times 100 \times 3}$ and predicts the orthographic distance from each point to the surface [57]. This distance corresponds to the depth of the impression across the surface. Next, we transform this prediction into a point cloud $\hat{G}$ as:
+
+ $$
+ \hat{G} = G_{init} + f_{\theta_2}^{touch}(R) \cdot \hat{n}, \tag{3}
+ $$
+
+ where $\hat{n}$ denotes the plane's unit normal. This transforms the grid of points to the shape of the gel across the impression, and so should match the local geometry which deformed it. To learn $\theta_{2}$, we minimize the Chamfer distance between the predicted point cloud $\hat{G}$ and the ground truth point cloud local to the touch site, $G$. After predicting the local point cloud $\hat{G}$, a local touch chart $C$ can be obtained by minimizing the Chamfer distance between points sampled from $C$ and $\hat{G}$.
+
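+ A minimal sketch of Eq. (3) under the paper's $100 \times 100$ sensor grid; `f_touch` stands in for the U-Net-like network $f_{\theta_2}^{touch}$ and is an assumed callable, not the released implementation.
+
+ ```python
+ import torch
+
+ def touch_to_points(G_init, R, f_touch, n_hat):
+     # G_init: (100, 100, 3) points on the untouched gel plane;
+     # R: (100, 100, 3) touch reading; n_hat: (3,) plane unit normal;
+     # f_touch: network predicting a (100, 100) orthographic depth map.
+     depth = f_touch(R)
+     return G_init + depth[..., None] * n_hat  # Eq. (3): predicted G_hat
+ ```
+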
+ ![](images/396f927d2d66f2d876c1d4c03281e00b29b4b9701f5c297e3085895706bb83ef.jpg)
+ Vision Signals
+
+ ![](images/a052cdff6d992a1019ac18d582a955f7ab22eb6fd2fa0fcb5dec36778c4bbb68.jpg)
+ Touch Signals
+
+ ![](images/437de704981228c8acbda0a0940303be5b09f9309747c3b5eb079ba82d200ce2.jpg)
+ Object Surface with Touch Sites Highlighted
+
+ Figure 6: A data point from our dataset displaying an occluded (by hand) and unoccluded RGB image, 4 RGB images representing touch readings, and a 3D object surface with touch sites highlighted.
+
+ # 4 A Visuotactile Dataset of Object-Grasp Interactions
+
+ To validate the model described in Section 3, we built a new dataset that aims to capture the interactions between a robotic hand and an object it is touching. We simulate these interactions using a Wonik's Allegro Hand [56] equipped with vision-based touch sensors [40] on each of its three fingers and thumb. We use objects from the 3D Warehouse [1], given its ubiquitous use in computer vision research, and to enable comparisons with previous vision-only work [58, 19, 21, 65].
+
+ An example instance of data collected from a single grasp is highlighted in Figure 6. We load example objects into the 3D robotics simulator Pybullet [11], place the hand randomly on its surface, and close its fingers attempting to produce contact between the sensors and some point on the object using inverse kinematics. To simulate the touch signal from each sensor, a grid of 10,000 points on the surface of the sensor is projected towards the object using sphere tracing [24] in Pytorch [49], defining a depth map from the sensor's perspective. We then define three lights (pure red, green and blue) around the boundary of the surface the depth map defines, and a camera looking down at the surface from above. With this setup we use the Phong reflection model [51] to compute the resulting simulated touch signal with resolution $100 \times 100 \times 3$. This process provides a quality approximation of how vision-based tactile sensors work, and upon visual inspection the simulated images look plausible to a human expert. To acquire visual information from this interaction two images are rendered using Blender [10]: (1) a pre-interaction image of the object alone, and (2) an interaction image of an object occluded by the hand grasping it. Both images have resolution $256 \times 256 \times 3$. Details with respect to the Allegro Hand, how the grasp simulations are performed in Pybullet, the rendering and scene settings in Blender, and the simulation of touch signals can be found in the supplemental materials.
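+
+ As a rough illustration of the shading step (a simplified, diffuse-only Phong model rather than the simulator's actual code): surface normals are estimated from the sphere-traced depth map by finite differences and lit by three coloured directional lights. The light directions are assumptions, and the finite-difference normals lose a one-pixel border relative to the $100 \times 100$ signal.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def shade_touch_signal(depth):
+     # depth: (100, 100) impression depth map obtained from sphere tracing.
+     dzdx = depth[:, 2:] - depth[:, :-2]   # central differences
+     dzdy = depth[2:, :] - depth[:-2, :]
+     n = torch.stack([-dzdx[1:-1, :], -dzdy[:, 1:-1],
+                      torch.ones_like(dzdx[1:-1, :])], dim=-1)
+     n = F.normalize(n, dim=-1)            # (98, 98, 3) unit normals
+     # Three directional lights: pure red, green, and blue.
+     dirs = F.normalize(torch.tensor([[1.0, 0.0, 1.0],
+                                      [-1.0, 1.0, 1.0],
+                                      [-1.0, -1.0, 1.0]]), dim=-1)
+     intensity = (n @ dirs.t()).clamp(min=0)  # (98, 98, 3), one per light
+     return intensity                      # RGB image: channel k lit by light k
+ ```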
+
+ The bottle, knife, cellphone, and rifle classes were chosen from the 3D Warehouse data due to their hand-held nature, for a total of 1732 objects. From each grasp an occluded image, an unoccluded image, four simulated touch signals with a mask indicating if each touch was successful, the hand's current pose, a global point cloud of the object's shape, and four local point clouds defining each touch site are recorded. This information is visualized in Figure 6. From each object, five hand-object interactions, or grasps, are recorded, and for each grasp at least one successful touch occurs, though on average $62.4\%$ of touches are successful. We split the dataset into training and test sets with approximately a 90:10 ratio. Further details on the design and content of this dataset, together with in-depth statistics and analysis of the content, are provided in the supplemental materials.
+
+ # 5 Experimental Results
+
+ In the following section, we describe the experiments designed to validate our approach to 3D reconstruction that leverages both visual and haptic sensory information. We start by outlining our model selection process. Then, using our best model, we validate the generalization of the complementary role of vision and touch for 3D shape reconstruction. We follow by examining the effect of an increasing number of grasps, and then measure the ability of our approach to effectively extrapolate around touch sites. For all experiments, details with respect to experiment design, optimization procedures, hardware used, runtime, and hyper-parameters considered can be found in the supplemental materials.
+
+ # 5.1 Complementarity of vision and touch: model selection and generalization
+
+ In the model selection, we compare our approach to three other modality fusion strategies on the validation set: (1) Sphere-based, where the chart-based initialization is replaced with a sphere-based one, and the sphere vertices contain a concatenation of projected vision features and touch features extracted from a simple CNN; (2) Chart-based (no copying), where we remove the hard copying of local touch charts in the prediction; and (3) Chart-based (no comm.), where we remove the communication between charts in the GCN and only copy them to the final prediction. For vision-only inputs, we compare our model to the sphere-based model only. For all comparisons we consider both the occluded and unoccluded vision signals.
+
+ <table><tr><td>Row</td><td>Model</td><td>Vision</td><td>Touch</td><td>Bottle</td><td>Knife</td><td>Cellphone</td><td>Rifle</td><td>Average</td></tr><tr><td>1</td><td>Sphere-based</td><td>U</td><td>✓</td><td>0.775</td><td>0.572</td><td>1.262</td><td>0.643</td><td>0.813</td></tr><tr><td>2</td><td>Chart-based (no copying)</td><td>U</td><td>✓</td><td>0.741</td><td>0.538</td><td>1.141</td><td>0.603</td><td>0.756</td></tr><tr><td>3</td><td>Chart-based (no comm.)</td><td>U</td><td>✓</td><td>0.709</td><td>0.723</td><td>1.222</td><td>0.500</td><td>0.788</td></tr><tr><td>4</td><td>Ours</td><td>U</td><td>✓</td><td>0.741</td><td>0.676</td><td>1.116</td><td>0.473</td><td>0.751</td></tr><tr><td>5</td><td>Sphere-based</td><td>O</td><td>✓</td><td>0.985</td><td>0.692</td><td>1.270</td><td>1.023</td><td>0.992</td></tr><tr><td>6</td><td>Chart-based (no copying)</td><td>O</td><td>✓</td><td>0.953</td><td>0.656</td><td>1.176</td><td>0.892</td><td>0.919</td></tr><tr><td>7</td><td>Chart-based (no comm.)</td><td>O</td><td>✓</td><td>0.954</td><td>0.784</td><td>1.413</td><td>0.904</td><td>1.014</td></tr><tr><td>8</td><td>Ours</td><td>O</td><td>✓</td><td>0.872</td><td>0.685</td><td>1.142</td><td>0.806</td><td>0.876</td></tr><tr><td>9</td><td>Sphere-based</td><td>U</td><td>X</td><td>0.816</td><td>0.561</td><td>1.322</td><td>0.667</td><td>0.841</td></tr><tr><td>10</td><td>Ours</td><td>U</td><td>X</td><td>0.783</td><td>0.703</td><td>1.115</td><td>0.588</td><td>0.797</td></tr><tr><td>11</td><td>Sphere-based</td><td>O</td><td>X</td><td>1.093</td><td>0.719</td><td>1.404</td><td>1.074</td><td>1.072</td></tr><tr><td>12</td><td>Ours</td><td>O</td><td>X</td><td>0.994</td><td>0.831</td><td>1.301</td><td>0.956</td><td>1.020</td></tr></table>
+
+ Table 1: Model selection. We report the per-class Chamfer distance on the validation set together with the average value. Note that O stands for occluded and U for unoccluded.
+
+ ![](images/7e0b4a264188e1e106ce9811d73ef450cae00be688fac888fb220a031bab8efe.jpg)
+ Figure 7: Reconstruction results of our method across different input modalities and numbers of grasps. For the vision signal, we use an unoccluded RGB image.
+
+ The results of model selection are presented in Table 1. We observe that: (1) the sphere-based model suffers a decrease in average performance when compared to our model (see rows 1 vs 4, 5 vs 8, 9 vs 10, and 11 vs 12); (2) the copying of the local charts into the final prediction leads to performance boosts (see rows 2 vs 4, and 6 vs 8); and (3) the global prediction benefits from communication between touch and vision charts (see rows 3 vs 4, and 7 vs 8). Moreover, as expected, we notice a further decrease in average performance when comparing each unoccluded vision model with its occluded vision counterpart. Finally, for models leveraging vision and touch, we consistently observe an improvement w.r.t. their vision-only baselines, which particularly benefits our full chart-based approach. This improvement is especially noticeable when considering occluded vision, where touch information is able to enhance the reconstruction of sites occluded by the hand touching the object. To further validate our chart-based approach, its performance on single-image 3D object reconstruction on [1] was evaluated and compared to an array of popular methods in this setting. The performance of our model here was highly competitive with that of state-of-the-art methods, and the results, details, and analysis of this experiment can be found in the supplemental materials.
+
+ In Table 2, we highlight the generalization of our best model by evaluating it on the test set for five 3D shape reconstruction setups, namely occluded and unoccluded vision scenarios with and without touch. We notice that the improvement introduced by including the haptic modality generalizes to the test set, for both occluded and unoccluded vision signals. Moreover, we test our approach by removing the vision signal and optimizing it to recover the 3D shape using only tactile signals. In this case, we observe an increased global error of 3.050, compared to the next-worst model with an error of 1.074, demonstrating the difficulty of extrapolating without global context and further highlighting the locality of the touch signals. Finally, we display some example reconstructions that our best performing fusion models produce in Figure 7.
+
+ <table><tr><td rowspan="2">Input</td><td colspan="2">Vision (occluded)</td><td colspan="2">Vision (unoccluded)</td><td rowspan="2">Touch only</td></tr><tr><td>Touch</td><td>No Touch</td><td>Touch</td><td>No Touch</td></tr><tr><td>Ours</td><td>0.991</td><td>1.074</td><td>0.804</td><td>0.861</td><td>3.050</td></tr></table>
+
+ Table 2: Test set results for 3D reconstruction tasks with different input modalities: combinations of touch readings and an occluded or unoccluded vision signal.
+
+ <table><tr><td>Class</td><td>Bottle</td><td>Knife</td><td>Cellphone</td><td>Rifle</td></tr><tr><td>C.D.</td><td>0.0099</td><td>0.0136</td><td>0.0072</td><td>0.00749</td></tr></table>
+
+ Table 3: Chamfer distance per class for local point cloud prediction at each touch site.
+
+ ![](images/25eb7ff5f795b3d9c787996066d9e5a744d92483aa912c14bfad712ee25f7f57.jpg)
+ Figure 8: Multi-grasp experiment: we depict the Chamfer distance change w.r.t. one grasp.
+
+ ![](images/f4a6cf47008491096512cb8b91ac6259b4acf30a38f3ccf70c4d6206d8c93fd6.jpg)
+ Figure 9: Local Chamfer distance at expanding distances around each touch site.
+
+ # 5.2 Going beyond a single grasp: Multi-grasp experiments
+
+ We design a further experiment in which we examine the effect of providing an increasing number of grasps. The only practical change to the model here is that the number of touch charts increases by 4 for every additional grasp provided. This experiment is conducted using 1 to 5 grasps, both in the touch-only setting, where vision information is excluded, and in the unoccluded vision and touch setting. The results of this experiment are shown in Figure 8, where we demonstrate that increasing the number of grasps provided significantly improves the reconstruction accuracy, both with and without the addition of unoccluded vision signals. Reconstruction results across different numbers of grasps can be viewed in Figure 7. From this experiment, it can be concluded that our model gains greater insight into the nature of an object by touching new areas on its surface.
+
+ # 5.3 From touch sensor readings to local structure prediction
+
+ Per-class reconstruction results at each touch site using the U-Net-based architecture are highlighted in Table 3. As expected, the reconstructions are practically perfect when compared to the error incurred over full surfaces (smallest average global error of 0.804). The small errors incurred here are mainly due to the fact that predicted points are selected as belonging to the surface by observing differences between the touch signal and an untouched touch signal. This leads to overshooting and undershooting of the boundary of the touch, and consequently to too-large or too-small predicted surfaces. A reconstruction result from this experiment is displayed in Figure 10.
+
+ Last, we design an experiment which examines how well the target surface is reconstructed in expanding regions around each touch site. To do this, square rings of points of 1 to 5 times larger dimensions than the touch sensor are projected onto each object's surface at each touch site in order to produce increasingly distant regions around them. Then, the mean distance from these points to the closest point in the corresponding prediction is computed to determine how well these regions have been reconstructed. We perform this experiment with and without touch for both occluded and unoccluded vision models, and the results are shown in Figure 9. As expected, the vision-only models incur approximately the same loss at every plane size, while models which leverage touch begin with a drastically lower loss whose error increases only slowly as the plane size grows. This experiment implies that the sharing of information between local and global charts allows for the propagation of touch information to regions around each touch site, suggesting a successful fusion of the complementary signals of vision and touch.
+
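+ A minimal sketch of this evaluation; for simplicity the rings here are laid out in the sensor plane rather than projected onto the object surface, and all names are illustrative.
+
+ ```python
+ import torch
+
+ def ring_points(center, half_size, n=64):
+     # n points on the perimeter of an axis-aligned square in the z=0 plane.
+     t = torch.linspace(-1, 1, n // 4)
+     top = torch.stack([t, torch.ones_like(t)], dim=-1)
+     sq = torch.cat([top, -top, top.flip(-1), -top.flip(-1)]) * half_size
+     return center + torch.cat([sq, torch.zeros(len(sq), 1)], dim=-1)
+
+ def local_errors(touch_center, sensor_half_size, pred_points):
+     # Mean distance from each expanding ring to the nearest predicted point.
+     errs = []
+     for k in range(1, 6):                 # rings 1x .. 5x the sensor size
+         ring = ring_points(touch_center, k * sensor_half_size)
+         d = torch.cdist(ring, pred_points)          # (n, M) distances
+         errs.append(d.min(dim=1).values.mean())     # one-sided mean
+     return torch.stack(errs)
+ ```
+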
+ ![](images/0d7c2aad2bf4d0a3fd043ab2a4cdb438964b9682220cca9592e68eb5cae21c42.jpg)
+ Touch Signal
+
+ ![](images/735ec59afe6f11bc94adc83e388003108011115eaa984c07d89d218b5fb4bd1c.jpg)
+ Predicted Depth
+
+ ![](images/fe3eff5619d75cd466572f84d2f41456aef832cbf1668af5d7ff49670fb280a7.jpg)
+ Predicted Local Points
+
+ ![](images/1aad546c947816a5ccb07f1b47bc6aa0e083e76891fe009eec45bd0442037ee3.jpg)
+ Predicted Touch Chart
+
+ ![](images/a46201e007ea6a3fbd57412b0458c157cb1779759edfa7ff3749e4818926d6c7.jpg)
+ Ground Truth Local Points
+
+ Figure 10: Local prediction of structure at a touch site, together with the touch chart.
+
+ # 5.4 Limitations
+
+ There exist a few limitations of this method which are worth addressing. The first noteworthy limitation is that in the final object predictions, charts sometimes overlap poorly, creating a noisy boundary as opposed to a more attractive smooth surface. This can be credited to the lack of the natural regularizers for encouraging smoothness which exist in more constrained methods [65, 58, 36]. This property can be observed in Figure 7. A second, and somewhat related, limitation of the method is that charts are not forced to form a continuous connected surface. This occasionally leads to charts lying disconnected from the main body of the predicted surface. However, from our qualitative evaluation of 100 predicted objects in the test set, this only occurs $5\%$ of the time, and has very little effect on the quality of the predictions. A final limitation is that our method requires full 3D scene information during training, and at test time requires full hand pose information. While trivial to acquire in our simulated environment, for application to real environments this data requirement is somewhat unrealistic.
+
+ # 6 Conclusion
+
+ In this paper, we explored the problem of 3D shape reconstruction from vision and touch. To do so, we introduced a dataset of simulated touch and vision signals, and proposed a chart-based approach that effectively exploits the complementary nature of both modalities, namely, the high-fidelity local information from touch and the global information from vision. While some limitations of our approach exist, such as potentially noisy or incomplete surface predictions, our results consistently highlight the benefit of combining both modalities to improve upon single-modality baselines, and show the potential of using a chart-based approach to combine vision and touch signals in a principled way. The benefit of fusing vision and touch is further emphasized by the ability of our model to gracefully extrapolate around touch sites, and by the improved reconstruction accuracy when providing an increasing number of grasps, which suggests that the active sensing of visual and touch signals is a promising avenue to improve 3D shape reconstruction.
+
+ # Broader Impact
+
+ Our contributions allow for improved understanding of the three-dimensional world in which we all live. The impact of this work lies mainly in the field of 3D object understanding, such as better 3D reconstruction of objects in simulated environments as well as potential improvements in shape understanding for real-world robot-object manipulation. There are many benefits to using improved 3D object understanding, and it may prove especially useful for the fields of automation, robotics, computer graphics, and augmented and virtual reality. Failures of these models could arise if automation tools are not properly introduced and biases are not properly addressed. In particular, these models could result in poor recognition of 3D objects in diverse contexts, as has already been shown for 2D recognition systems. On the research side, to mitigate these risks, we encourage further investigation to outline the performance of 3D understanding systems in the wild in a diverse set of contexts and geographical locations, and to mitigate the associated performance drops.
+
+ # 7 Acknowledgments
+
+ We would like to acknowledge the NSERC Canadian Robotics Network, the Natural Sciences and Engineering Research Council, and the Fonds de recherche du Québec – Nature et Technologies for their funding support, as granted to the McGill University authors. We would also like to thank Scott Fujimoto and Shaoxiong Wang for their helpful feedback.
+
+ # References
+
+ [1] 3D Warehouse. https://3dwarehouse.sketchup.com/. Accessed: 2019-11-15.
+ [2] Peter Allen. Surface descriptions from vision and touch. In IEEE International Conference on Robotics and Automation (ICRA), volume 1, pages 394–397. IEEE, 1984.
+ [3] P. N. Belhumeur, D. J. Kriegman, and A. L. Yuille. The bas-relief ambiguity. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 1060–1066, 1997.
+ [4] A. Bierbaum, I. Gubarev, and R. Dillmann. Robust shape recovery for sparse contact location and normal data from haptic exploration. In 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3200–3205, 2008.
+ [5] Alexander Bierbaum, Matthias Rambow, Tamim Asfour, and Rudiger Dillmann. A potential field approach to dexterous tactile exploration of unknown objects. In IEEE-RAS International Conference on Humanoid Robots (Humanoids), pages 360–366. IEEE, 2008.
+ [6] Mårten Björkman, Yasemin Bekiroglu, Virgile Högman, and Danica Kragic. Enhancing visual perception of shape through tactile glances. In IEEE International Conference on Intelligent Robots and Systems (IROS), 11 2013.
+ [7] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In International Conference on Learning Representations (ICLR), 2014.
+ [8] Wenzheng Chen, Huan Ling, Jun Gao, Edward Smith, Jaakko Lehtinen, Alec Jacobson, and Sanja Fidler. Learning to predict 3d objects with an interpolation-based differentiable renderer. In Advances in Neural Information Processing Systems, pages 9609–9619, 2019.
+ [9] Christopher B Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In Proceedings of the European Conference on Computer Vision (ECCV), pages 628–644. Springer, 2016.
+ [10] Blender Online Community. Blender – a 3D modelling and rendering package. Blender Foundation, Stichting Blender Foundation, Amsterdam, 2018.
+ [11] Erwin Coumans and Yunfei Bai. Pybullet, a python module for physics simulation for games, robotics and machine learning. GitHub repository, 2016.
+ [12] A. Dame, V. A. Prisacariu, C. Y. Ren, and I. Reid. Dense reconstruction using 3d object shape priors. In 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1288–1295, 2013.
+ [13] Michael Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, pages 3844–3852, USA, 2016. Curran Associates Inc.
+ [14] Theo Deprelle, Thibault Groueix, Matthew Fisher, Vladimir Kim, Bryan Russell, and Mathieu Aubry. Learning elementary structures for 3d shape generation and matching. In Advances in Neural Information Processing Systems, pages 7435–7445, 2019.
+ [15] Danny Driess, Peter Englert, and Marc Toussaint. Active learning with query paths for tactile object shape exploration. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017.
+ [16] Haoqiang Fan, Hao Su, and Leonidas Guibas. A point set generation network for 3d object reconstruction from a single image. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 38, 2017.
+ [17] D. F. Fouhey, A. Gupta, and M. Hebert. Data-driven 3d primitives for single image understanding. In 2013 IEEE International Conference on Computer Vision (ICCV), pages 3392–3399, 2013.
+ [18] Gabriela Zarzar Gandler, Carl Henrik Ek, Mårten Björkman, Rustam Stolkin, and Yasemin Bekiroglu. Object shape estimation and modeling, based on sparse gaussian process implicit surfaces, combining visual data and tactile exploration. Robotics and Autonomous Systems, 126:103433, 2020.
+ [19] Georgia Gkioxari, Jitendra Malik, and Justin Johnson. Mesh r-cnn. IEEE International Conference on Computer Vision (ICCV), 2019.
+ [20] Thibault Groueix, Matthew Fisher, Vladimir G Kim, Bryan C Russell, and Mathieu Aubry. 3d-coded: 3d correspondences by deep deformation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 230–246, 2018.
+ [21] Thibault Groueix, Matthew Fisher, Vladimir G Kim, Bryan C Russell, and Mathieu Aubry. A papier-mâché approach to learning 3d surface generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 216–224, 2018.
+ [22] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems (NeurIPS), pages 1024–1034, 2017.
+ [23] Christian Hane, Shubham Tulsiani, and Jitendra Malik. Hierarchical surface prediction for 3d object reconstruction. arXiv preprint arXiv:1704.00710, 2017.
+ [24] John C Hart. Sphere tracing: A geometric method for the antialiased ray tracing of implicit surfaces. The Visual Computer, 12(10):527–545, 1996.
+ [25] Paul Henderson and Vittorio Ferrari. Learning to generate and reconstruct 3d meshes with only 2d supervision. arXiv preprint arXiv:1807.09259, 2018.
+ [26] D. Hoiem, A. A. Efros, and M. Hebert. Geometric context from a single image. In IEEE International Conference on Computer Vision (ICCV), volume 1, pages 654–661, 2005.
+ [27] C. Hane, N. Savinov, and M. Pollefeys. Class specific 3d object shape priors using surface normals. In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 652–659, 2014.
+ [28] Jarmo Ilonen, Jeannette Bohg, and Ville Kyrki. Three-dimensional object reconstruction of symmetric objects by fusing visual and tactile sensing. The International Journal of Robotics Research, 33(2):321–341, 2014.
+ [29] Eldar Insafutdinov and Alexey Dosovitskiy. Unsupervised learning of shape and pose with differentiable point clouds. arXiv preprint arXiv:1810.09381, 2018.
+ [30] Dominic Jack, Jhony K Pontes, Sridha Sridharan, Clinton Fookes, Sareh Shirazi, Frederic Maire, and Anders Eriksson. Learning free-form deformations for 3d object reconstruction. arXiv preprint arXiv:1803.10932, 2018.
+ [31] N. Jamali, C. Ciliberto, L. Rosasco, and L. Natale. Active perception: Building objects' models using tactile exploration. In IEEE-RAS International Conference on Humanoid Robots (Humanoids), pages 179–185, Nov 2016.
+ [32] Krishna Murthy Jatavallabhula, Edward Smith, Jean-Francois Lafleche, Clement Fuji Tsang, Artem Rozantsev, Wenzheng Chen, and Tommy Xiang. Kaolin: A pytorch library for accelerating 3d deep learning research. arXiv preprint arXiv:1911.05063, 2019.
+ [33] Jürgen Jost. Riemannian Geometry and Geometric Analysis, page 1. Springer, sixth edition, 2011.
+ [34] Angjoo Kanazawa, Shubham Tulsiani, Alexei A Efros, and Jitendra Malik. Learning category-specific mesh reconstruction from image collections. arXiv preprint arXiv:1803.07549, 2018.
+ [35] Abhishek Kar, Christian Hane, and Jitendra Malik. Learning a multi-view stereo machine. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems (NeurIPS), pages 365–376. Curran Associates, Inc., 2017.
+ [36] Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. Neural 3d mesh renderer. arXiv preprint arXiv:1711.07566, 2017.
+ [37] Alex Kendall, Hayk Martirosyan, Saumitro Dasgupta, and Peter Henry. End-to-end learning of geometry and context for deep stereo regression. In IEEE International Conference on Computer Vision (ICCV), pages 66–75. IEEE Computer Society, 2017.
+ [38] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th International Conference on Learning Representations (ICLR), 2017.
+ [39] Jan Koenderink, Andrea Doorn, and Astrid Kappers. Surface perception in pictures. Percept. Psychophys., 52:487–496, 09 1992.
+ [40] Mike Lambeta, Po-Wei Chou, Stephen Tian, Brian Yang, Benjamin Maloon, Victoria Rose Most, Dave Stroud, Raymond Santos, Ahmad Byagowski, Gregg Kammerer, Dinesh Jayaraman, and Roberto Calandra. DIGIT: A novel design for a low-cost compact high-resolution tactile sensor with application to in-hand manipulation. IEEE Robotics and Automation Letters (RA-L), 5(3):3838–3845, 2020.
+ [41] M. A. Lee, Y. Zhu, K. Srinivasan, P. Shah, S. Savarese, L. Fei-Fei, A. Garg, and J. Bohg. Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks. In 2019 International Conference on Robotics and Automation (ICRA), pages 8943–8950, 2019.
+ [42] Jae Hyun Lim, Pedro O. Pinheiro, Negar Rostamzadeh, Chris Pal, and Sungjin Ahn. Neural multisensory scene inference. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 8994–9004, 2019.
+ [43] Shan Luo, Joao Bimbo, Ravinder Dahiya, and Hongbin Liu. Robotic tactile perception of object properties: A review. arXiv e-prints, page arXiv:1711.03810, November 2017.
+ [44] Uriel Martinez-Hernandez, Tony Dodd, Lorenzo Natale, Giorgio Metta, Tony Prescott, and Nathan Lepora. Active contour following to explore object shape with robot touch. In 2013 World Haptics Conference (WHC), 04 2013.
+ [45] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
+ [46] J Krishna Murthy, GV Sai Krishna, Falak Chhaya, and K Madhava Krishna. Reconstructing vehicles from a single image: Shape priors for road scene understanding. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 724–731. IEEE, 2017.
+ [47] David Novotny, Diane Larlus, and Andrea Vedaldi. Learning 3d object categories by looking around them. In IEEE International Conference on Computer Vision (ICCV), pages 5228–5237. IEEE, 2017.
+ [48] Simon Ottenhaus, Martin Miller, David Schiebener, Nikolaus Vahrenkamp, and Tamim Asfour. Local implicit surface estimation for haptic exploration. In IEEE-RAS International Conference on Humanoid Robots (Humanoids), pages 850–856, 11 2016.
+ [49] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc., 2019.
+ [50] Z. Pezzementi, C. Reyda, and G. D. Hager. Object mapping, recognition, and localization from tactile geometry. In 2011 IEEE International Conference on Robotics and Automation, pages 5942–5948, 2011.
+ [51] Bui Tuong Phong. Illumination for computer generated pictures. Communications of the ACM, 18(6):311–317, 1975.
+ [52] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1(2):4, 2017.
+ [53] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. Octnet: Learning deep 3d representations at high resolutions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6620–6629, 2017.
+ [54] J. Rock, T. Gupta, J. Thorsen, J. Gwak, D. Shin, and D. Hoiem. Completing 3d object shape from one depth image. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2484–2493, 2015.
+ [55] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
+ [56] SimLab. Allegro hand overview, 2016. [Online; accessed 25-May-2020].
+ [57] Edward J. Smith, Scott Fujimoto, and David Meger. Multi-view silhouette and depth decomposition for high resolution 3d object representation. In Advances in Neural Information Processing Systems, pages 6479–6489, 2018.
+ [58] Edward J. Smith, Scott Fujimoto, Adriana Romero, and David Meger. Geometrics: Exploiting geometric structure for graph-encoded objects. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, ICML 2019, volume 97 of Proceedings of Machine Learning Research, pages 5866–5876. PMLR, 2019.
+ [59] Edward J Smith and David Meger. Improved adversarial systems for 3d object generation and reconstruction. In Conference on Robot Learning (CoRL), pages 87–96, 2017.
+ [60] Nicolas Sommer, Miao Li, and Aude Billard. Bimanual compliant tactile exploration for grasping unknown objects. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 6400–6407, 09 2014.
+ [61] Xingyuan Sun, Jiajun Wu, Xiuming Zhang, Zhoutong Zhang, Chengkai Zhang, Tianfan Xue, Joshua B. Tenenbaum, and William T. Freeman. Pix3d: Dataset and methods for single-image 3d shape modeling. CoRR, abs/1804.04610, 2018.
+ [62] Maxim Tatarchenko, Alexey Dosovitskiy, and Thomas Brox. Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2088–2096, 2017.
+ [63] Shubham Tulsiani, Tinghui Zhou, Alexei A Efros, and Jitendra Malik. Multi-view supervision for single-view reconstruction via differentiable ray consistency. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 209–217. IEEE, 2017.
+ [64] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. International Conference on Learning Representations (ICLR), 2018.
+ [65] Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2mesh: Generating 3d mesh models from single rgb images. arXiv preprint arXiv:1804.01654, 2018.
+ [66] S. Wang, J. Wu, X. Sun, W. Yuan, W. T. Freeman, J. B. Tenenbaum, and E. H. Adelson. 3d shape perception from monocular vision, touch, and shape priors. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1606–1613, Oct 2018.
+ [67] David Watkins-Valls, Jacob Varley, and Peter Allen. Multi-modal geometric learning for grasping and manipulation. In 2019 International Conference on Robotics and Automation (ICRA), pages 7339–7345. IEEE, 2019.
+ [68] Jiajun Wu, Yifan Wang, Tianfan Xue, Xingyuan Sun, William T Freeman, and Joshua B Tenenbaum. MarrNet: 3D Shape Reconstruction via 2.5D Sketches. In Advances In Neural Information Processing Systems (NeurIPS), 2017.
261
+ [69] Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T Freeman, and Joshua B Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In Advances in Neural Information Processing Systems (NeurIPS), pages 82-90, 2016.
262
+ [70] Jiajun Wu, Chengkai Zhang, Xiuming Zhang, Zhoutong Zhang, William T Freeman, and Joshua B Tenenbaum. Learning shape priors for single-view 3d completion and reconstruction. In Proceedings of the European Conference on Computer Vision (ECCV), pages 673-691. Springer, 2018.
263
+ [71] Yaoqing Yang, Chen Feng, Yiru Shen, and Dong Tian. Foldingnet: Point cloud auto-encoder via deep grid deformation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 206–215, 2018.
264
+ [72] Zhengkun Yi, Roberto Calandra, Filipe Fernandes Veiga, Herke van Hoof, Tucker Hermans, Yilei Zhang, and Jan Peters. Active tactile object exploration with Gaussian processes. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4925-4930, 2016.
265
+ [73] Wentao Yuan, Tejas Khot, David Held, Christoph Mertz, and Martial Hebert. Pcn: Point completion network. In 2018 International Conference on 3D Vision (3DV), pages 728-737, 2018.
3dshapereconstructionfromvisionandtouch/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:748f15e3803e53c020831a003ad6d237af57df63985f52d78e39fe4edec38ac8
3
+ size 343736
3dshapereconstructionfromvisionandtouch/layout.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c814aab4551542698d9bf1ccff364231a1f52b32f4ca2b8f833080b3cffd66da
3
+ size 362379
abanditlearningalgorithmandapplicationstoauctiondesign/63269d4e-ee93-47ec-8af5-376729555fe9_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:81b037b3fde41508fae1caad57435b3b530231f33a9546fe4595d8d21a72b9fb
3
+ size 68271
abanditlearningalgorithmandapplicationstoauctiondesign/63269d4e-ee93-47ec-8af5-376729555fe9_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1abc75ec2315dd9e9d31ac9b7d8a4598f98791c07d80df78b0a029129527dd5f
3
+ size 83748
abanditlearningalgorithmandapplicationstoauctiondesign/63269d4e-ee93-47ec-8af5-376729555fe9_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b9bf6496c8ec56911324ee51af8e64eb3f4c258b1437e498828b3596de73417d
3
+ size 347904
abanditlearningalgorithmandapplicationstoauctiondesign/full.md ADDED
@@ -0,0 +1,257 @@
1
+ # A Bandit Learning Algorithm and Applications to Auction Design
2
+
3
+ Nguyen Kim Thang
4
+
5
+ IBISC, Univ. Evry, University Paris-Saclay, France
6
+
7
+ kimthang.nguyen@univ-evry.fr
8
+
9
+ # Abstract
10
+
11
+ We consider online bandit learning in which at every time step, an algorithm has to make a decision and then observe only its reward. The goal is to design efficient (polynomial-time) algorithms that achieve a total reward approximately close to that of the best fixed decision in hindsight. In this paper, we introduce a new notion of $(\lambda, \mu)$ -concave functions and present a bandit learning algorithm that achieves a performance guarantee which is characterized as a function of the concavity parameters $\lambda$ and $\mu$ . The algorithm is based on the mirror descent algorithm in which the update directions follow the gradient of the multilinear extensions of the reward functions. The regret bound induced by our algorithm is $\widetilde{O}(\sqrt{T})$ which is nearly optimal.
12
+
13
+ We apply our algorithm to auction design, specifically to welfare maximization, revenue maximization, and no-envy learning in auctions. In welfare maximization, we show that a version of fictitious play in smooth auctions guarantees a competitive regret bound which is determined by the smooth parameters. In revenue maximization, we consider the simultaneous second-price auctions with reserve prices in multi-parameter environments. We give a bandit algorithm which achieves the total revenue at least $1/2$ times that of the best fixed reserve prices in hindsight. In no-envy learning, we study the bandit item selection problem where the player valuation is submodular and provide an efficient $1/2$ -approximation no-envy algorithm.
14
+
15
+ # 1 Introduction
16
+
17
+ In online learning, the goal is to design algorithms which are robust in dynamically evolving environments by applying optimization methods that learn from experience and observations. Characterizing conditions, or in general discovering regularity properties, under which efficient online learning algorithms with performance guarantee exist is a major research agenda in online learning. In this paper, we consider this line of research and present a new regularity condition for the design of efficient online learning algorithms. Subsequently, we establish the applicability of our approach in auction design.
18
+
19
+ General online problem. At each time step $t = 1,2,\ldots$ , an algorithm chooses $\pmb{x}^t\in [0,1]^n$ . After the algorithm has committed to its choice, an adversary selects a function $f^{t}:[0,1]^{n}\to [0,1]$ that subsequently induces the reward of $f^{t}(\pmb{x}^{t})$ for the algorithm. In the problem, we are interested in the bandit setting that at every time $t$ , the algorithm observes only its reward $f^{t}(\pmb{x}^{t})$ . The goal is to efficiently achieve the total gain approximately close to that obtained by the best decision in hindsight.
20
+
21
+ We consider the following notion of regret, which measures the performance of algorithms. An algorithm is $(r,R(T))$-regret if, for an arbitrary total number of time steps $T$ and for any sequence of reward functions $f^1,\ldots,f^T\in \mathcal{F}$, it holds that $\sum_{t = 1}^{T}f^{t}(\boldsymbol{x}^{t})\geq r\cdot \max_{\boldsymbol{x}\in [0,1]^{n}}\sum_{t = 1}^{T}f^{t}(\boldsymbol{x}) - R(T)$. We also
22
+
23
+ say that the algorithm achieves an $r$-regret bound of $R(T)$. In general, one seeks algorithms with $(r,R(T))$-regret such that $r > 0$ is as large as possible (ideally, close to 1) and $R(T)$ is sublinear as a function of $T$, i.e., $R(T) = o(T)$. We also call $r$ the approximation ratio of the algorithm.
24
+
25
+ We introduce a regularity notion that generalizes the notion of concavity. The new notion, while simple, is crucial in our framework in order to design efficient online learning algorithms with performance guarantee.
26
+
27
+ Definition 1 A function $F$ is $(\lambda, \mu)$-concave if, for all vectors $\mathbf{x}$ and $\mathbf{x}^*$,
28
+
29
+ $$
30
+ \langle \nabla F (\boldsymbol {x}), \boldsymbol {x} ^ {*} - \boldsymbol {x} \rangle \geq \lambda F \left(\boldsymbol {x} ^ {*}\right) - \mu F (\boldsymbol {x}) \tag {1}
31
+ $$
32
+
33
+ Note that a concave function is $(1,1)$ -concave. A non-trivial example, shown in the paper, is the $(1,2)$ -concavity of the multilinear extension of a monotone submodular function.
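+
+ As a concrete illustration of Definition 1 (our sketch, not part of the paper), the inequality can be checked numerically for a smooth concave function on $[0,1]^n$ using finite-difference gradients; the test function, tolerance and sampling scheme below are our assumptions.
+
+ ```python
+ import numpy as np
+
+ def num_grad(F, x, eps=1e-6):
+     """Central-difference gradient of F at x."""
+     g = np.zeros_like(x)
+     for i in range(len(x)):
+         e = np.zeros_like(x)
+         e[i] = eps
+         g[i] = (F(x + e) - F(x - e)) / (2 * eps)
+     return g
+
+ def check_lm_concave(F, lam, mu, n, trials=1000, seed=0):
+     """Empirically test <grad F(x), x* - x> >= lam*F(x*) - mu*F(x) on [0,1]^n."""
+     rng = np.random.default_rng(seed)
+     for _ in range(trials):
+         x, xs = rng.random(n), rng.random(n)
+         if np.dot(num_grad(F, x), xs - x) < lam * F(xs) - mu * F(x) - 1e-5:
+             return False
+     return True
+
+ F = lambda x: 1.0 - np.mean((x - 0.5) ** 2)   # a smooth concave function on [0,1]^n
+ print(check_lm_concave(F, lam=1, mu=1, n=4))  # expected: True, i.e. (1,1)-concave
+ ```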
34
+
35
+ # 1.1 The Main Algorithm
36
+
37
+ We aim to design a bandit algorithm for the general online problem with emphasis on auctions. Bandit algorithms have been widely studied in online convex optimization [17], but in the context of auction design, standard approaches have various limits. The main issues are: (1) the non-concavity of the (reward) functions, and (2) the intrinsic nature of the bandit setting (only the value $f^t(\boldsymbol{x}^t)$ is observed). We overcome these issues by an approach which consists of lifting the search space (of the solutions) and the reward functions to a higher-dimensional space and considering the multilinear extensions of the reward functions in that space. Concretely, we consider a sufficiently dense lattice $\mathcal{L}$ in $[0,1]^n$ such that every point in $[0,1]^n$ can be approximated by a point in $\mathcal{L}$. Then, we lift all lattice points in $\mathcal{L}$ to vertices of a hypercube in a higher-dimensional space. Subsequently, we consider the multilinear extensions of the reward functions $f^t$ in that space. This procedure enables several useful properties, in particular $(\cdot, \cdot)$-concavity, that hold for the multilinear extensions but not necessarily for the original reward functions. (For example, the multilinear extension of a monotone submodular function is always $(1,2)$-concave, but the submodular function itself is not.) The introduction of $(\cdot, \cdot)$-concavity and the use of multilinear extensions constitute the novel points of our approach compared to previous ones. This allows us to bound the regret of our algorithm, which is based on classic mirror descent with respect to the gradients of the multilinear extensions.
38
+
39
+ Informal Theorem 1 Let $f^t: [0,1]^n \to [0,1]$ be the reward function at time $1 \leq t \leq T$ and let $F^t$ be the multilinear extension of the discretization of $f^t$ on a lattice $\mathcal{L}$ . Assume that $f^t$ 's are $L$ -Lipschitz and $F^t$ 's are $(\lambda, \mu)$ -concave. Then, there exists a bandit algorithm achieving
40
+
41
+ $$
42
+ \sum_ {t = 1} ^ {T} \mathbb {E} \left[ f ^ {t} \left(\boldsymbol {x} ^ {t}\right) \right] \geq \frac {\lambda}{\mu} \cdot \max _ {\boldsymbol {x} \in [ 0, 1 ] ^ {n}} \sum_ {t = 1} ^ {T} f ^ {t} (\boldsymbol {x}) - O \left(\max \{\lambda / \mu , 1 \} L n ^ {3 / 2} (\log T) ^ {3 / 2} (\log \log T) \sqrt {T}\right).
43
+ $$
44
+
45
+ The formal statement corresponding to the above informal theorem is Theorem 2. By this theorem, determining the performance guarantee is reduced to computing the concavity parameters. Moreover, the regret bound of $\widetilde{O}(\sqrt{T})$ is nearly optimal, matching the bound proved in the context of online convex optimization (for concave functions, i.e., $(1,1)$-concave functions). The approach is convenient for deriving bandit learning algorithms in the context of auction design, as shown in the applications.
46
+
47
+ # 1.2 Applications to Auction Design
48
+
49
+ In a general auction design setting, each player $i$ has a valuation (or type) $v_{i}$ and a set of actions $\mathcal{A}_i$ for $1\leq i\leq n$. Given an action profile $\pmb {a} = (a_{1},\dots ,a_{n})$ consisting of the actions chosen by the players, the auctioneer decides an allocation $\pmb {o}(\pmb {a})$ and a payment $p_i(\pmb{o}(\pmb{a}))$ for each player $i$. Note that for a fixed auction $\pmb{o}$, the outcome $\pmb{o}(\pmb {a})$ of the game is completely determined by the action profile $\pmb{a}$. Then, the utility of player $i$ with valuation $v_{i}$, following the quasi-linear utility model, is defined as $u_{i}(\pmb {o}(\pmb {a});v_{i}) = v_{i}(\pmb {o}(\pmb {a})) - p_{i}(\pmb {o}(\pmb {a}))$. The social welfare of an auction is defined as the total utility of all participants (the players and the auctioneer): $\mathrm{Sw}(\pmb{o}(\pmb{a});\pmb{v}) = \sum_{i = 1}^{n}u_{i}(\pmb{o}(\pmb{a});v_{i}) + \sum_{i = 1}^{n}p_{i}(\pmb{o}(\pmb{a}))$. The total revenue of the auction is $\mathrm{REV}(\pmb{o}(\pmb{a}),\pmb{v}) = \sum_{i = 1}^{n}p_{i}(\pmb{o}(\pmb{a}))$. When $\pmb{o}$ is clear from the context, we simply write $v_{i}(\pmb {a}),u_{i}(\pmb {a};v_{i}),p_{i}(\pmb {a}),\mathrm{Sw}(\pmb {a};\pmb {v}),\mathrm{REV}(\pmb {a},\pmb {v})$ instead of $v_{i}(\pmb{o}(\pmb{a})),u_{i}(\pmb{o}(\pmb{a});v_{i}),p_{i}(\pmb{o}(\pmb{a})),\mathrm{Sw}(\pmb{o}(\pmb{a});\pmb{v}),\mathrm{REV}(\pmb{o}(\pmb{a}),\pmb{v})$, respectively. In the paper, we consider two standard objectives: welfare maximization and revenue maximization. Note that in revenue maximization, we refer to the players as bidders.
50
+
51
+ # 1.2.1 Fictitious Play in Smooth Auctions
52
+
53
+ We consider adaptive dynamics in auctions. In the setting, there is an underlying auction $\mathcal{O}$ and there are $n$ players; each player $i$ has a set of actions $\mathcal{A}_i$ and a valuation function $v_i$ taking values in $[0, 1]$ (by normalization). In every time step $1 \leq t \leq T$, each player $i$ selects a strategy, which is a distribution in $\Delta(\mathcal{A}_i)$, according to some given adaptive dynamic. After all players have committed their strategies, which results in a strategy profile $\sigma^t \in \Delta(\mathcal{A})$, the auction induces a social welfare $\mathrm{Sw}(o, \sigma^t) := \mathbb{E}_{a \sim \sigma^t}[\mathrm{Sw}(o(a); v)]$. In this setting, we study the total welfare achieved by the given adaptive dynamic compared to the optimal welfare. This problem can be cast in the online optimization framework: at time step $t$, the players' strategy profile corresponds to the decision of the algorithm, and the gain of the algorithm is the social welfare induced by the auction w.r.t. that strategy profile.
54
+
55
+ Smooth auctions form an important class of auctions in welfare maximization. The smoothness notion has been introduced [32, 28] in order to characterize the efficiency of (Bayes-Nash) equilibria of auctions. It has been shown that several auctions in widely studied settings are smooth, and many proof techniques for analyzing equilibrium efficiency can be reduced to the smoothness argument.
56
+
57
+ Definition 2 ([32, 28]) For parameters $\lambda, \mu \geq 0$ , an auction is $(\lambda, \mu)$ -smooth if for every valuation profile $\mathbf{v} = (v_{1}, \ldots, v_{n})$ , there exist action distributions $\overline{D}_{1}(\mathbf{v}), \ldots, \overline{D}_{n}(\mathbf{v})$ over $\mathcal{A}_{1}, \ldots, \mathcal{A}_{n}$ such that, for every action profile $\mathbf{a}$ , $\sum_{i=1}^{n} \mathbb{E}_{\overline{a}_{i} \sim \overline{D}_{i}(\mathbf{v})} [u_{i}(\overline{a}_{i}, \mathbf{a}_{-i}; v_{i})] \geq \lambda \cdot \mathrm{Sw}(\overline{\mathbf{a}}; \mathbf{v}) - \mu \cdot \mathrm{Sw}(\mathbf{a}; \mathbf{v})$ where $\mathbf{a}_{-i}$ is the action profile similar to $\mathbf{a}$ without player $i$ .
58
+
59
+ It has been proved that if an auction is $(\lambda, \mu)$-smooth then every Bayes-Nash equilibrium of the auction has expected welfare at least a $\lambda / \mu$ fraction of that of the optimal auction [28, 32]. Moreover, the smoothness framework extends to individually-vanishing-regret dynamics. A sequence of action profiles $\pmb{a}^1, \pmb{a}^2, \dots$ is an individually-vanishing-regret sequence if for every player $i$ and action $a_i'$, $\lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \left[ u_i(a_i', \pmb{a}_{-i}^t; v_i) - u_i(\pmb{a}^t; v_i) \right] \leq 0$.
60
+
61
+ However, several interesting dynamics are not guaranteed to have the individually-vanishing-regret property. In a recent survey, Roughgarden et al. [30] raised the question of whether adaptive dynamics without the individually-vanishing-regret condition can achieve approximately optimal welfare. Among others, fictitious play [5] is an interesting, widely-studied dynamic which attracts significant attention in the community.
62
+
63
+ In the paper, we consider a version of fictitious play in smooth auctions, namely Perturbed Discrete Time Fictitious Play (PDTFP). In general, it is not known whether this dynamic has the individually-vanishing-regret property. Despite that, using our framework, we prove that given an offline $(\lambda ,\mu)$-smooth auction, the PDTFP dynamic achieves a $\lambda /(1 + \mu)$ fraction of the optimal welfare.
64
+
65
+ Informal Theorem 2 If the underlying auction $\mathbf{o}$ is a $(\lambda, \mu)$ -smooth then the PDTFP dynamic achieves $\left(\frac{\lambda}{1 + \mu}, R(T)\right)$ -regret where $R(T) = O\left(\frac{\sqrt{T}}{1 + \mu}\right)$ .
66
+
67
+ # 1.2.2 Revenue Maximization in Multi-Dimensional Environments
68
+
69
+ We consider online simultaneous second-price auctions with reserve prices in multi-dimensional environments. In the setting, there are $n$ bidders and $m$ items to be sold to these bidders. At every time step $t = 1,2,\dots,T$, the auctioneer selects reserve prices $r_i^t = (r_{i1}^t,\ldots ,r_{im}^t)$ for each bidder $i$, where $r_{ij}^{t}$ is the reserve price of item $j$ for bidder $i$. Each bidder $i$, for $1\leq i\leq n$, has a (private) valuation $v_{i}^{t}:2^{[m]}\to \mathbb{R}^{+}$ over subsets of items. After the reserve prices have been chosen, every bidder $i$ picks a bid vector $b_{i}^{t}$, where $b_{ij}^{t}$ is the bid of bidder $i$ on item $j$ for $1\leq j\leq m$. The auction for each item $1\leq j\leq m$ then works as follows: (1) remove all bidders $i$ with $b_{ij}^{t} < r_{ij}^{t}$; (2) run the second-price auction on the remaining bidders to determine the winner of item $j$, namely the bidder with the highest non-removed bid on item $j$; and (3) charge the winner of item $j$ the maximum of $r_{ij}^{t}$ and the second highest bid among the non-removed bids $b_{ij}^{t}$. The objective of the auctioneer is to achieve a total revenue close to that achieved by the best fixed reserve-price auction. Note that in the bandit setting, the auction is given as a black box: at every time step, the auctioneer observes only the total revenue (total price), without knowing either the bids of the bidders or the winner/price of each item. Among other benefits, this setting enhances the privacy of bidders.
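+
+ The per-item selling rule above is simple to implement. Below is a minimal sketch (ours, not the paper's code) of steps (1)-(3) and of the total revenue; the array layout, tie-breaking by index, and the convention that an item with no surviving bidder yields zero revenue are our assumptions.
+
+ ```python
+ import numpy as np
+
+ def item_revenue(bids, reserves):
+     """Steps (1)-(3): remove, run second price, charge max(reserve, runner-up)."""
+     alive = [i for i in range(len(bids)) if bids[i] >= reserves[i]]
+     if not alive:
+         return 0.0                                   # every bidder was removed
+     winner = max(alive, key=lambda i: bids[i])
+     second = max([bids[i] for i in alive if i != winner], default=0.0)
+     return max(reserves[winner], second)
+
+ def total_revenue(B, R):
+     """REV(r, b): sum of per-item revenues; B and R are n x m arrays."""
+     return sum(item_revenue(B[:, j], R[:, j]) for j in range(B.shape[1]))
+
+ B = np.array([[0.9, 0.2], [0.6, 0.8], [0.1, 0.5]])   # bids of 3 bidders on 2 items
+ R = np.array([[0.7, 0.3], [0.4, 0.9], [0.2, 0.1]])   # bidder-specific reserves
+ print(total_revenue(B, R))                            # 0.7 + 0.1 = 0.8
+ ```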
70
+
71
+ Second-price auctions with reserve prices in single-parameter environments were considered by Roughgarden and Wang [29] in full-information online learning. Using the Follow-the-Perturbed-Leader strategy, they gave a polynomial-time online algorithm that achieves half the revenue of the best fixed reserve-price auction minus a term $O(\sqrt{T} \log T)$ (so their algorithm is $(1/2, O(\sqrt{T} \log T))$-regret in our terminology). The problem we consider cannot be reduced to applying their algorithm to $m$ separate items, since (1) bids on different items might be highly correlated (due to the bidders' valuations); and (2) in the bandit setting with multiple items, the auctioneer knows only the total revenue (not the revenue from each item). Using our framework, we prove the following result.
72
+
73
+ Informal Theorem 3 There exists a bandit algorithm that achieves $(1/2, O(m\sqrt{nmT}\log T))$-regret for revenue maximization in multi-parameter environments.
74
+
75
+ # 1.2.3 Bandit No-Envy Learning in Auctions
76
+
77
+ The concept of no-envy learning in auctions has been introduced by Daskalakis and Syrgkanis [10] in order to maintain approximate welfare optimality while guaranteeing computational tractability. The concept is inspired by the notion of Walrasian equilibrium. Intuitively, an allocation of items to buyers, together with a price on each item, forms a Walrasian equilibrium if no buyer envies another allocation given the current prices. In the paper, we consider no-envy bandit learning algorithms for the following online item selection problem.
78
+
79
+ In the problem, there are $m$ items and a player with a monotone valuation $v: 2^{[m]} \to \mathbb{R}^+$. At every time step $1 \leq t \leq T$, the player chooses a subset of items $S^t \subset [m]$ and the adversary picks adaptively (possibly depending on the history up to time $t - 1$, but not on the current set $S^t$) a threshold vector $\pmb{p}^t$. The player observes the total price $\sum_{j \in S^t} p_j^t$ and gets the reward of $v(S^t) - \sum_{j \in S^t} p_j^t$. A learning algorithm for the online item selection problem is $r$-approximate no-envy [10] if, for any adaptively chosen sequence of threshold vectors $\pmb{p}^t$ for $1 \leq t \leq T$, the sets $S^t$ for $1 \leq t \leq T$ chosen by the algorithm satisfy $\mathbb{E}\left[\sum_{t = 1}^{T} \left(v(S^t) - \sum_{j \in S^t} p_j^t\right)\right] \geq \max_{S \subseteq [m]} \sum_{t = 1}^{T} \left(r \cdot v(S) - \sum_{j \in S} p_j^t\right) - R(T)$ where the regret $R(T) = o(T)$.
80
+
81
+ Daskalakis and Syrgkanis [10] considered the problem in the full-information setting (i.e., at every time step $t$, the player observes the whole vector $\pmb{p}^t$) where the valuation $v$ is a coverage function<sup>1</sup> and gave a $(1 - 1/e)$-approximate no-envy algorithm with regret bound $O(\sqrt{T})$. The algorithm is designed via the convex rounding scheme [12], a technique which has been used in approximation algorithms and in truthful mechanism design. In this paper, we consider submodular valuations, a more general and widely-studied class of valuations. A valuation $v: 2^{[m]} \to \mathbb{R}^+$ is submodular if for any sets $S \subset T \subset [m]$ and for every item $j$, it holds that $v(S \cup j) - v(S) \geq v(T \cup j) - v(T)$. Using our framework, we derive the following result.
82
+
83
+ Informal Theorem 4 There exists a $(1/2, O(m^{3/2}\sqrt{T}\log(mT)))$-regret no-envy learning algorithm for the bandit item selection problem where the player valuation is submodular.
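+
+ To make the submodularity condition above concrete, the following is a small brute-force checker (our illustration, not from the paper); the coverage-style example valuation is our choice, and the enumeration is only feasible for small $m$.
+
+ ```python
+ from itertools import combinations
+
+ def is_submodular(v, m):
+     """Brute-force test of v(S+j) - v(S) >= v(T+j) - v(T) for S subset of T."""
+     ground = range(m)
+     subsets = [frozenset(c) for k in range(m + 1) for c in combinations(ground, k)]
+     for S in subsets:
+         for T in subsets:
+             if not S <= T:
+                 continue
+             for j in ground:
+                 if j in T:
+                     continue
+                 if v(S | {j}) - v(S) < v(T | {j}) - v(T) - 1e-12:
+                     return False
+     return True
+
+ # Coverage-style valuation: each item covers a subset of a small universe.
+ cover = {0: {0, 1}, 1: {1, 2}, 2: {3}}
+
+ def coverage_v(S):
+     """v(S) = size of the union of the sets covered by items in S."""
+     covered = set()
+     for i in S:
+         covered |= cover[i]
+     return len(covered)
+
+ print(is_submodular(coverage_v, 3))   # True: coverage functions are submodular
+ ```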
84
+
85
+ # 1.3 Related Work
86
+
87
+ There is a large literature on online learning and auction design. In this section, we summarize and discuss only works directly related to ours. The interested reader can refer to [31, 17] for online learning and to [30] (and references therein) for auction design.
88
+
89
+ Online/Bandit Learning. Online learning, or online convex optimization, is an active research domain. The first no-regret algorithm was given by Hannan [16]. Subsequently, Littlestone and Warmuth [23] and Freund and Schapire [14] gave improved algorithms with regret $O(\sqrt{T\log(|\mathcal{A}|)})$, where $|\mathcal{A}|$ is the size of the action space. Kalai and Vempala [20] presented the first efficient online algorithm, called Follow-the-Perturbed-Leader (FTPL), for linear objective functions. The strategy consists of adding perturbation to the cumulative gain (payoff) of each action and then selecting the action with the highest perturbed gain. This strategy has been generalized and successfully applied to several settings [18, 33, 10, 11]. Specifically, FTPL and its generalized versions have been used to
90
+
91
+ design efficient oracle-based online no-regret algorithms beyond the linear setting, namely in submodular [18] and non-convex [2] settings.
92
+
93
+ In bandit learning, many interesting results and powerful optimization/algorithmic methods have been introduced, including interior point methods [1], random walks [26], continuous multiplicative updates [9], random perturbation [3], and iterative methods [13]. In bandit linear optimization, the near-optimal regret bound of $\widetilde{O}(n\sqrt{T})$ has been established through a long line of works [1, 9, 6]. Beyond linear functions, several results are known. Kleinberg [22] and Flaxman et al. [13] provided $\widetilde{O}(\mathrm{poly}(n)T^{3/4})$-regret algorithms for general convex functions. Subsequently, Hazan and Li [19] presented an (exponential-time) algorithm which achieves $\widetilde{O}(\exp(n)\sqrt{T})$-regret. Recently, Bubeck et al. [7] gave the first polynomial-time algorithm with regret $\widetilde{O}(n^{9.5}\sqrt{T})$.
94
+
95
+ Smooth Auctions and Fictitious Play. The smoothness framework was introduced in order to prove approximation guarantees for equilibria in complete-information [27] and incomplete-information [32, 28] games. Smooth auctions (Definition 2) form a large class of auctions for which the price of anarchy can be systematically characterized by smoothness arguments. Many interesting auctions have been shown to be smooth, and the smoothness argument is a central proof technique for analyzing the price of anarchy. We refer the reader to a recent survey [30] for more details. The smoothness framework extends to adaptive dynamics with vanishing regret. However, several important dynamics are not guaranteed to have the vanishing-regret property, for example the class of fictitious play [5] and other classes of dynamics in [15]. A research agenda, as raised in [30], is to characterize the performance of such dynamics. Some recent works (e.g., [24]) have made progress in this direction.
96
+
97
+ Revenue Maximization. Optimal truthful auctions in single-parameter environments are completely characterized by Myerson [25]. Recently, a major line of research in data-driven mechanism design has focused on competitive auctions without full knowledge of the valuation distribution, and even in non-stochastic settings. Second-price auctions with reserve prices in single-parameter environments, and variants thereof, have been studied in [21, 4, 8]. Recently, Roughgarden and Wang [29] gave a polynomial-time online algorithm that achieves $(1/2, O(\sqrt{T}))$-regret. Subsequently, Dudik et al. [11] showed that the same regret bound can be obtained using their framework. Both results are in the online full-information setting.
98
+
99
+ No-envy Learning in Auctions. The notion of no-envy learning in auctions has been introduced by Daskalakis and Syrgkanis [10]. They proposed this concept in order to maintain both welfare optimality and computational tractability. Among others, Daskalakis and Syrgkanis [10] considered the online item selection problem with coverage valuations and gave an efficient $(1 - 1/e)$-approximate no-envy algorithm with a regret bound of $O(\sqrt{T})$.
100
+
101
+ # 1.4 Organization
102
+
103
+ Due to space limits, we present only revenue maximization (described in Section 1.2.2) as an application. We refer the reader to the supplementary material for the full paper with all applications (and complete proofs).
104
+
105
+ # 2 Framework of Online Learning
106
+
107
+ We present a general efficient online algorithm and characterize its regret bound based on its concavity parameters. In Section 2.1, we prove the guarantee of the online mirror descent algorithm assuming access to unbiased estimates of the gradients of the functions. In Section 2.2, we derive an algorithm in the bandit setting.
108
+
109
+ # 2.1 Regret of $(\lambda ,\mu)$ -Concave Functions
110
+
111
+ Mirror descent. We are given a convex set $\mathcal{K}$. Let $\Phi$ be an $\alpha_{\Phi}$-strongly convex function w.r.t. a norm $\| \cdot \|$. (A function $\Phi : \mathbb{R}^n \to \mathbb{R}$ is $\alpha_{\Phi}$-strongly convex w.r.t. $\| \cdot \|$ if $\Phi(\pmb{x}') \geq \Phi(\pmb{x}) + \langle \nabla \Phi(\pmb{x}), \pmb{x}' - \pmb{x} \rangle + \frac{\alpha_{\Phi}}{2} \| \pmb{x}' - \pmb{x} \|^2$.) Initially, let $\pmb{x}^1$ be an arbitrary point in $\mathcal{K}$. At time step $t$, play $\pmb{x}^t$ and receive the reward of $F^t(\pmb{x}^t)$. Let $\pmb{g}^t$ be an unbiased estimate of $-\nabla F^t(\pmb{x}^t)$ revealed at time $t$. The algorithm selects the
112
+
113
+ decision $\pmb{x}^{t + 1}$ using the standard mirror descent update: $\pmb{x}^{t + 1} = \arg \max_{\pmb{x}\in \mathcal{K}}\bigl \{\langle \eta \pmb {g}^t,\pmb {x} - \pmb {x}^t\rangle -D_\Phi (\pmb {x}\| \pmb {x}^t)\bigr \}$, where the Bregman divergence is defined as $D_{\Phi}(\pmb {x}\| \pmb{x}^{\prime})\coloneqq \Phi (\pmb {x}) - \Phi (\pmb{x}^{\prime}) - \langle \nabla \Phi (\pmb{x}^{\prime}),\pmb {x} - \pmb{x}^{\prime}\rangle$.
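+
+ As an illustration (ours, not the paper's code), with the Euclidean regularizer $\Phi(\pmb{x}) = \frac{1}{2}\|\pmb{x}\|^2$ the Bregman divergence becomes $\frac{1}{2}\|\pmb{x} - \pmb{x}'\|^2$ and the update above has the closed form $\pmb{x}^{t+1} = \Pi_{\mathcal{K}}(\pmb{x}^t + \eta \pmb{g}^t)$. The sketch below uses $\mathcal{K} = [0,1]^n$ and, for the toy run, feeds the exact reward gradient as the update direction.
+
+ ```python
+ import numpy as np
+
+ def mirror_descent_step(x, g, eta):
+     """One Euclidean mirror descent step; projection onto K = [0,1]^n is a clip."""
+     return np.clip(x + eta * g, 0.0, 1.0)
+
+ # Toy usage: maximize a fixed concave reward F(x) = 1 - ||x - c||^2.
+ c = np.array([0.3, 0.8, 0.5])
+ x = np.full(3, 0.5)
+ for t in range(200):
+     grad = -2.0 * (x - c)                 # exact gradient of the reward at x
+     x = mirror_descent_step(x, grad, eta=0.05)
+ print(np.round(x, 3))                     # approaches c = (0.3, 0.8, 0.5)
+ ```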
114
+
115
+ Theorem 1 Assume that $F^t$ is $(\lambda, \mu)$ -concave for every $1 \leq t \leq T$ and $\pmb{x}^*$ is the best solution in hindsight, i.e., $\pmb{x}^* \in \arg \max_{\mathcal{K}} \sum_{t=1}^{T} F^t(\pmb{x})$ . Then the mirror descent algorithm achieves $\left( \frac{\lambda}{\mu}, R(T) \right)$ -regret in expectation where $R(T) = \frac{1}{\mu \cdot \eta} D_{\Phi}(\pmb{x}^* \| \pmb{x}^1) + \frac{\eta}{\mu \cdot 2\alpha_{\Phi}} \sum_{t=1}^{T} \| \pmb{g}^t \|_*^2$ . If $\| \pmb{g}^t \|_* \leq L_g$ for $1 \leq t \leq T$ (i.e., $F^t$ is $L_g$ -Lipschitz w.r.t $\| \cdot \|$ ) and $D_{\Phi}(\pmb{x}^* \| \pmb{x}^1)$ is bounded by $G^2$ then by choosing $\eta = \frac{G}{L_g} \sqrt{\frac{2\alpha_{\Phi}}{T}}$ , we have $R(T) \leq \frac{GL_g}{\mu} \sqrt{2\alpha_{\Phi} T}$ .
116
+
117
+ # 2.2 Bandit Algorithm
118
+
119
+ In this section, we consider the bandit setting in which at every time $t$ one can observe only the reward $f^{t}(\pmb{x}^{t})$ where $f^{t}$ is a bounded function defined on the convex set $\mathcal{K} = [0,1]^n$ . Without loss of generality, assume that $f^{t}:[0,1]^{n}\to [0,1]$ . In our algorithm, we will consider a discretization of $[0,1]^n$ and the multilinear relaxations of functions $f^{t}$ on these discrete points constructed as follows.
120
+
121
+ Discretization and Multilinear Extension. Let $f:[0,1]^n \to [0,1]$ be a function. Consider a lattice $\mathcal{L} = \{0,2^{-M},2\cdot 2^{-M},\ldots ,\ell \cdot 2^{-M},\ldots ,1\} ^n$ where $0\leq \ell \leq 2^{M}$ for some large parameter $M$ as a discretization of $[0,1]^n$ . $M$ is a constant parameter to be chosen later. Note that each $x_{i}\in \{0,2^{-M},2\cdot 2^{-M},\dots ,\ell \cdot 2^{-M},\dots ,1\}$ can be uniquely decomposed as $x_{i} = \sum_{j = 0}^{M}2^{-j}y_{ij}$ where $y_{ij}\in \{0,1\}$ . By this observation, we lift the set $[0,1]^n\cap \mathcal{L}$ to the $(n\times (M + 1))$ -dim space. Specifically, define a bijective lifting map $m:[0,1]^n\cap \mathcal{L}\to \{0,1\}^{n\times (M + 1)}$ such that each point $(x_{1},\ldots ,x_{n})\in \mathcal{K}\cap \mathcal{L}$ is mapped to the unique point $(y_{10},\ldots ,y_{1M},\ldots ,y_{n0},\ldots ,y_{nM})\in \{0,1\}^{n\times (M + 1)}$ where $x_{i} = \sum_{j = 0}^{M}2^{-j}y_{ij}$ . Define function $\tilde{f}:\{0,1\}^{n\times (M + 1)}\to [0,1]$ such that $\tilde{f} (\mathbf{1}_S)\coloneqq f(m^{-1}(\mathbf{1}_S))$ ; in other words, $\tilde{f} (\mathbf{1}_S) = f(\pmb {x})$ where $\pmb {x}\in [0,1]^n\cap \mathcal{L}$ and $\mathbf{1}_S = m(\pmb {x})$ . Note that $\mathbf{1}_S$ with $S\subset [n\times (M + 1)]$ is a $(n\times (M + 1))$ -dim vector with $(ij)^{th}$ -coordinate equal to 1 if $(i,j)\in S$ and equal to 0 otherwise. Consider a multilinear extension $F:[0,1]^{n\times (M + 1)}\to [0,1]$ of $\tilde{f}$ defined as follows.
122
+
123
+ $$
124
+ F (\boldsymbol {z}) := \sum_ {S \subset [ n \times (M + 1) ]} \tilde {f} (\mathbf {1} _ {S}) \prod_ {(i, j) \in S} z _ {i j} \prod_ {(i, j) \notin S} (1 - z _ {i j}).
125
+ $$
126
+
127
+ By the definition, $F(z)$ can be seen as $\mathbb{E}[\tilde{f}(\mathbf{1}_S)]$ where the $(ij)^{th}$ -coordinate of $\mathbf{1}_S$ equals 1 (i.e., $(\mathbf{1}_S)_{ij} = 1$ ) with probability $z_{ij}$ .
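+
+ A minimal sketch (ours) of this construction: the lifting map $m$ computes the binary expansion of each coordinate, and $F(\pmb{z})$ is estimated by Monte Carlo exactly as in the expectation view above. The toy reward and sample counts are our choices.
+
+ ```python
+ import numpy as np
+
+ M = 4
+ POW = 2.0 ** -np.arange(M + 1)            # coefficients (1, 1/2, ..., 2^-M)
+
+ def lift(x):
+     """m: lattice point in [0,1]^n -> binary matrix y with x_i = sum_j 2^-j y_ij."""
+     y = np.zeros((len(x), M + 1), dtype=int)
+     for i, xi in enumerate(x):
+         r = xi
+         for j in range(M + 1):            # greedy binary expansion
+             if r >= POW[j] - 1e-12:
+                 y[i, j], r = 1, r - POW[j]
+     return y
+
+ def unlift(y):
+     """m^{-1}: binary matrix back to a point in [0,1]^n."""
+     return y @ POW
+
+ def multilinear_mc(f, z, samples=20000, seed=0):
+     """Monte Carlo estimate of F(z): sample 1_S coordinatewise with prob. z_ij."""
+     rng = np.random.default_rng(seed)
+     vals = [f(unlift((rng.random(z.shape) < z).astype(int)))
+             for _ in range(samples)]
+     return float(np.mean(vals))
+
+ f = lambda x: float(np.mean(x))           # a toy bounded reward on [0,1]^n
+ z = np.full((3, M + 1), 0.5)              # a point of the lifted hypercube
+ print(multilinear_mc(f, z))               # Monte Carlo estimate of F(z)
+ ```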
128
+
129
+ Algorithm description. Our algorithm, formally given in Algorithm 1, is inspired by the algorithm SCRIBLE [1], which was derived in the context of bandit linear optimization. It has been observed that the gradient estimates of the functions in SCRIBLE are unbiased only if those functions are linear; this represents a main obstacle to deriving an algorithm with the optimal regret guarantee $R(T) = \widetilde{O}(\sqrt{T})$. Aiming for a regret of $\widetilde{O}(\sqrt{T})$, we overcome this obstacle in our algorithm by considering at every step the gradient estimate of the multilinear extension of the reward function (constructed above). That gradient estimate will indeed be proved unbiased. Incorporating that gradient estimator into the scheme of [1] and following our approach, we prove the regret guarantee of the algorithm. Note that our algorithm does not need any information about the concavity parameters of the functions.
130
+
131
+ Theorem 2 Let $f^t: [0,1]^n \to [0,1]$ be the reward function at time $1 \leq t \leq T$ and let $F^t$ be the multilinear extension of the discretization of $f^t$ based on a lattice $\mathcal{L}$ (defined earlier). Assume that the $F^t$'s are $(\lambda, \mu)$-concave and that for every $x \in [0,1]^n$, there exists $\overline{x} \in \mathcal{L}$ such that $|f^t(x) - f^t(\overline{x})| \leq L \cdot 2^{-M}$ for every $1 \leq t \leq T$ (for example, if the $f^t$'s are $L$-Lipschitz). Then, by choosing $M = \log T$ and $\eta = O\left(\frac{1}{(nM)^{3/2} \cdot \sqrt{T}}\right)$ and $\Phi$ as an $O(nM)$-self-concordant function, Algorithm 1 (the mirror descent algorithm) achieves:
132
+
133
+ $$
134
+ \sum_ {t = 1} ^ {T} \mathbb {E} \big [ f ^ {t} (\pmb {x} ^ {t}) \big ] \geq \frac {\lambda}{\mu} \cdot \max _ {\pmb {x} \in [ 0, 1 ] ^ {n}} \sum_ {t = 1} ^ {T} f ^ {t} (\pmb {x}) - O \big (\max \{\lambda / \mu , 1 \} L n ^ {3 / 2} (\log T) ^ {3 / 2} (\log \log T) \sqrt {T} \big).
135
+ $$
136
+
137
+ # Algorithm 1 Algorithm in the bandit setting.
138
+
139
+ 1: Let $\Phi$ be a $\nu$ -self-concordant function over $[0,1]^{n\times (M + 1)}$ .
140
+ 2: Let $z^1 \in \mathrm{int}([0,1]^{n \times (M + 1)})$ such that $\nabla \Phi(z^1) = 0$ .
141
+ 3: for $t = 1$ to $T$ do
142
+ 4: Let $\mathbf{A}^t = \left[\nabla^2\Phi (\mathbf{z}^t)\right]^{-1 / 2}$ .
143
+ 5: Pick $\pmb{u}^t \in \mathfrak{S}_n$ uniformly at random and set $\pmb{y}^t = \pmb{z}^t + \pmb{A}^t\pmb{u}^t$.
144
+ 6: Round $\pmb{y}^t$ to a random point $\mathbf{1}_{S^t} \in \{0, 1\}^{n \times (M + 1)}$ such that element $(i,j)$ appears in $S^t$ with probability $y_{ij}^t$ .
145
+ 7: Play $\pmb{x}^t = m^{-1}(\mathbf{1}_{S^t})$ and receive the reward of $f^{t}(\pmb{x}^{t})$ .
146
+ 8: Let $\pmb{g}^t = -n(M + 1)f^t(\pmb{x}^t)(\pmb{A}^t)^{-1}\pmb{u}^t$ and compute the solution $\pmb{z}^{t + 1} \in [0,1]^{n \times (M + 1)}$ by applying the mirror descent framework on $F^t$ (Section 2.1). Specifically,
147
+
148
+ $$
149
+ \boldsymbol {z} ^ {t + 1} = \arg \max _ {\boldsymbol {z} \in [ 0, 1 ] ^ {n \times (M + 1)}} \left\{\langle \eta \boldsymbol {g} ^ {t}, \boldsymbol {z} - \boldsymbol {z} ^ {t} \rangle - D _ {\Phi} (\boldsymbol {z} \| \boldsymbol {z} ^ {t}) \right\}.
150
+ $$
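+
+ The estimator $\pmb{g}^t$ in line 8 is a shaped one-point gradient estimate. The sketch below (ours, illustrative) demonstrates the classic spherical form of this idea, with $\pmb{A}^t$ replaced by a scalar $\delta$: for $\pmb{u}$ uniform on the unit sphere, $(d/\delta) F(\pmb{z} + \delta \pmb{u}) \pmb{u}$ is an unbiased estimate of the gradient of the $\delta$-smoothed $F$; subtracting the constant $F(\pmb{z})$ leaves the expectation unchanged and reduces variance.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ d, delta = 6, 0.05
+ F = lambda z: 1.0 - np.sum((z - 0.4) ** 2)     # smooth test function
+
+ z = np.full(d, 0.6)
+ est, N = np.zeros(d), 100000
+ for _ in range(N):
+     u = rng.normal(size=d)
+     u /= np.linalg.norm(u)                     # uniform direction on the sphere
+     est += (d / delta) * (F(z + delta * u) - F(z)) * u
+ est /= N
+
+ print(np.round(est, 2))                        # averaged one-point estimates
+ print(np.round(-2.0 * (z - 0.4), 2))           # exact gradient: (-0.4, ..., -0.4)
+ ```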
151
+
152
+ # 3 Online Simultaneous Second-Price Auctions with Reserve Prices
153
+
154
+ In this section, we consider the online simultaneous second-price auctions with reserve prices (definition in Section 1.2.2). We denote the revenue of selling item $j$ as $\mathrm{REV}_j(\boldsymbol{r}^t, \boldsymbol{b}^t)$ where $\boldsymbol{b}^t = (b_1^t, \ldots, b_n^t)$ and $\boldsymbol{r}^t = (r_1^t, \ldots, r_n^t)$ . The revenue of the auctioneer at time step $t$ is $\mathrm{REV}(\boldsymbol{r}^t, \boldsymbol{b}^t) = \sum_{j=1}^{m} \mathrm{REV}_j(\boldsymbol{r}^t, \boldsymbol{b}^t)$ . The goal of the auctioneer is to achieve the total revenue $\sum_{t=1}^{T} \mathrm{REV}(\boldsymbol{r}^t, \boldsymbol{b}^t)$ approximately close to that achieved by the best fixed reserve-price $\max_{\boldsymbol{r}^*} \sum_{t=1}^{T} \mathrm{REV}(\boldsymbol{r}^*, \boldsymbol{b}^t)$ .
155
+
156
+ In the setting, by scaling, assume that all bids and reserve prices are in $\mathcal{K} = [0,1]^{n\times m}$ . Consider the lattice $\mathcal{L} = \{\ell \cdot 2^{-M}:0\leq \ell \leq 2^M\}^{n\times m}\subset [0,1]^{n\times m}$ for some large parameter $M$ as a discretization of $[0,1]^{n\times m}$ . Observe that for any reserve price vector $\pmb{r}$ , $|\mathrm{REV}(\pmb {r},\pmb {b}) - \mathrm{REV}(\overline{\pmb{r}},\pmb {b})|\leq m\cdot 2^{-M}$ where $\overline{\pmb{r}}$ is a reserve price vector such that $\overline{r}_{ij}$ is the largest multiple of $2^{-M}$ smaller than $r_{ij}$ for every $i,j$ (for some large enough $M$ ). Therefore, one can approximate the revenue up to any arbitrary precision by restricting the reserve price on $\mathcal{L}$ . We slightly abuse notation by denoting $\mathrm{REV}_j(\mathbf{1}_S,\pmb {b})$ as $\mathrm{REV}_j(\pmb {r},\pmb {b})$ where $\mathbf{1}_S = m(\pmb {r})$ (recall that $m$ is the map defined in Section 2.2). Following Section 2.2, given a bid vector $\pmb{b}$ , the multilinear extension $\overline{\mathrm{REV}}$ of the revenue REV is defined as $\overline{\mathrm{REV}} (\cdot ,\pmb {b}):[0,1]^{n\times m\times (M + 1)}\to \mathbb{R}$ such that:
157
+
158
+ $$
159
+ \overline{\mathrm{REV}}(\boldsymbol{z}, \boldsymbol{b}) = \sum_{S \subset [n \times m \times (M + 1)]} \left( \sum_{j = 1}^{m} \mathrm{REV}_{j}(\mathbf{1}_{S}, \boldsymbol{b}) \right) \prod_{(i, j, k) \in S} z_{ijk} \prod_{(i, j, k) \notin S} (1 - z_{ijk}).
160
+ $$
161
+
162
+ Online bandit Reserve-Price Algorithm. Initially, let $r^1$ be an arbitrary feasible reserve-price. At each time step $t \geq 1$ ,
163
+
164
+ (i) select $r^t$ or 0 each with probability $1/2$ as the reserve-price;
165
+ (ii) receive the revenue corresponding to the selected reserve-price;
166
+ (iii) compute $\boldsymbol{r}^{t+1}$ using Algorithm 1 with the following specification: in line 8 of Algorithm 1, replace $f^t(\boldsymbol{x}^t)$ by $2\mathrm{REV}(\boldsymbol{r}^t, \boldsymbol{b}^t)$ if the selected reserve-price is $\boldsymbol{r}^t$ , or replace $f^t(\boldsymbol{x}^t)$ by 0 if the selected reserve-price is 0. (By doing that, the expected value of $\boldsymbol{g}^t$ in Algorithm 1 is $-\nabla \overline{\mathrm{REV}}(\boldsymbol{r}^t, \boldsymbol{b}^t)$ .)
167
+
168
+ Analysis. In order to analyze the performance of this algorithm, we study the properties of some related functions and then derive the regret bound for the algorithm. Fix a bid vector $\pmb{b}$. Let $\pmb{r}_j$ be the vector consisting of the reserve prices on item $j$, i.e., $\pmb{r}_j = (r_{1j},\dots,r_{nj})$. (Recall that $r_{ij}$ is the reserve price for bidder $i$ on item $j$.) As $\pmb{b}$ is fixed and the selling procedure of each item depends only on the reserve prices for that item, for simplicity we denote $\mathrm{REV}_j(\pmb{r},\pmb{b})$ as $\mathrm{REV}_j(\pmb{r}_j)$ and $\mathrm{REV}(\pmb{r},\pmb{b})$ as $\mathrm{REV}(\pmb{r})$. Define a function $h_j:\{0,1\}^{n\times (M + 1)}\to \mathbb{R}$ such that $h_j(\mathbf{1}_T) = \max \{\mathrm{REV}_j(\mathbf{1}_T),\mathrm{REV}_j(\mathbf{1}_{\emptyset})\} = \max \{\mathrm{REV}_j(\pmb{r}_j),\mathrm{REV}_j(\mathbf{0})\}$ where $\pmb{r}_j$ is the reserve price corresponding to $\mathbf{1}_T$ for $T\subset [n\times (M + 1)]$. Let $H_j:[0,1]^{n\times (M + 1)}\to \mathbb{R}$ be the multilinear extension of $h_j$. Moreover, define $H:[0,1]^{n \times m \times (M + 1)} \to \mathbb{R}$ as the multilinear extension of $\max \{\mathrm{REV}(\boldsymbol{r}), \mathrm{REV}(\boldsymbol{0})\}$, defined as
169
+
170
+ $$
171
+ H(\boldsymbol{z}) = \sum_{S \subset [n \times m \times (M + 1)]} \max \{\mathrm{REV}(\mathbf{1}_{S}), \mathrm{REV}(\mathbf{1}_{\emptyset})\} \prod_{(i, j, k) \in S} z_{ijk} \prod_{(i, j, k) \notin S} (1 - z_{ijk}).
172
+ $$
173
+
174
+ Lemma 1 It holds that $H(z) = \sum_{j=1}^{m} H_{j}(z_{j})$ where $z_{j}$ is the restriction of $z$ to the coordinate related to item $j$ .
175
+
176
+ Lemma 2 Function $H_{j}$ is $(1,1)$ -concave.
177
+
178
+ Proof We prove that condition (1) of $(1,1)$-concavity holds for all points in the lattice. As the multilinear extension can be seen as an expectation over these points, the lemma then follows. Fix a bid profile $\boldsymbol{b}_j = (b_{1j},\dots,b_{nj})$. Without loss of generality, assume that $b_{1j} \geq b_{2j} \geq \ldots \geq b_{nj}$. Let $\boldsymbol{r}_j$ and $\boldsymbol{r}_j^*$ be two arbitrary reserve price vectors. We will show that
179
+
180
+ $$
181
+ \begin{aligned} \sum_{i = 1}^{n} \left[ \max \left\{ \mathrm{REV}_{j}(\boldsymbol{r}_{-i,j}, r_{ij}^{*}), \mathrm{REV}_{j}(\boldsymbol{0}) \right\} - \max \left\{ \mathrm{REV}_{j}(\boldsymbol{r}_{j}), \mathrm{REV}_{j}(\boldsymbol{0}) \right\} \right] \\ \geq \max \left\{ \mathrm{REV}_{j}(\boldsymbol{r}_{j}^{*}), \mathrm{REV}_{j}(\boldsymbol{0}) \right\} - \max \left\{ \mathrm{REV}_{j}(\boldsymbol{r}_{j}), \mathrm{REV}_{j}(\boldsymbol{0}) \right\} \end{aligned} \tag{2}
182
+ $$
183
+
184
+ where $\boldsymbol{r}_{-i,j}$ stands for the vector of reserve prices on item $j$ without the reserve price of bidder $i$.
185
+
186
+ Observe that the revenue $\max \{\mathrm{REV}_j(\pmb{r}_j'),\mathrm{REV}_j(\pmb {0})\}$ for every reserve price vector $\pmb{r}_j'$ is at least the second highest bid $b_{2j}$ (which is obtained in $\mathrm{REV}_j(\mathbf{0})$). Moreover, for any reserve price vector $\pmb{r}_j'$ such that the auctioneer either (1) removes the first bidder (with the highest bid) or (2) removes the second bidder and $r_{1j}'\leq b_{2j}$, the revenue is $\max \{\mathrm{REV}_j(\pmb {r}_j'),\mathrm{REV}_j(\pmb {0})\} = \mathrm{REV}_j(\pmb {0})$. Hence, $\max \{\mathrm{REV}_j(\pmb {r}_j'),\mathrm{REV}_j(\pmb {0})\} \neq \mathrm{REV}_j(\pmb {0})$ if and only if $b_{2j} < r_{1j}'\leq b_{1j}$.
187
+
188
+ By these observations, we deduce that $\max \{\mathrm{REV}_j(r_{-ij},r_{ij}^*),\mathrm{REV}_j(\mathbf{0})\} \neq \max \{\mathrm{REV}_j(r_j),\mathrm{REV}_j(\mathbf{0})\}$ if and only if $i = 1$ and either $\{b_{2j}\leq r_{1j}\neq r_{1j}^{*}\leq b_{1j}\}$ ; or $\{r_{1j}^{*}\in (b_{2j},b_{1j}]$ and $r_{1j}\notin (b_{2j},b_{1j}]\}$ ; or inversely $\{r_{1j}\in (b_{2j},b_{1j}]$ and $r_{1j}^{*}\notin (b_{2j},b_{1j}]\}$ .
189
+
190
+ Thus, proving Inequality (2) is equivalent to showing that
191
+
192
+ $$
193
+ \begin{aligned} \max \left\{ \mathrm{REV}_{j}(\boldsymbol{r}_{-1,j}, r_{1j}^{*}), \mathrm{REV}_{j}(\boldsymbol{0}) \right\} - \max \left\{ \mathrm{REV}_{j}(\boldsymbol{r}_{j}), \mathrm{REV}_{j}(\boldsymbol{0}) \right\} \\ \geq \max \left\{ \mathrm{REV}_{j}(\boldsymbol{r}_{j}^{*}), \mathrm{REV}_{j}(\boldsymbol{0}) \right\} - \max \left\{ \mathrm{REV}_{j}(\boldsymbol{r}_{j}), \mathrm{REV}_{j}(\boldsymbol{0}) \right\}. \end{aligned}
194
+ $$
195
+
196
+ Case 1: $b_{2j} \leq r_{1j} \neq r_{1j}^{*} \leq b_{1j}$ . In this case, both sides are equal to $r_{1j}^{*} - r_{1j}$ .
197
+
198
+ Case 2: $r_{1j}^{*}\in (b_{2j},b_{1j}]$ and $r_{1j}\notin (b_{2j},b_{1j}]$ . In this case, both sides are equal to $r_{1j}^{*} - b_{2j}$ .
199
+
200
+ Case 3: $r_{1j} \in (b_{2j}, b_{1j}]$ and $r_{1j}^* \notin (b_{2j}, b_{1j}]$ . In this case, both sides are equal to $b_{2j} - r_{1j}$ .
201
+
202
+ Case 4: the complementary of all previous cases. In this case, both sides are equal to 0.
203
+
204
+ Therefore, Inequality (2) holds and so the lemma follows.
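+
+ The case analysis can also be spot-checked numerically. The following randomized test (ours, illustrative) re-implements the single-item selling rule and verifies Inequality (2) on random bids and reserves; the instance sizes and tolerance are our choices.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(1)
+
+ def rev(bids, res):
+     """Second-price revenue on one item with bidder-specific reserves."""
+     alive = [i for i in range(len(bids)) if bids[i] >= res[i]]
+     if not alive:
+         return 0.0
+     w = max(alive, key=lambda i: bids[i])
+     second = max([bids[i] for i in alive if i != w], default=0.0)
+     return max(res[w], second)
+
+ def g(bids, res):
+     """g(r) = max{REV_j(r), REV_j(0)}, the gain used in Lemma 2."""
+     return max(rev(bids, res), rev(bids, np.zeros_like(res)))
+
+ for _ in range(10000):
+     b, r, rs = rng.random(4), rng.random(4), rng.random(4)
+     swap = lambda i: np.concatenate([r[:i], [rs[i]], r[i + 1:]])
+     lhs = sum(g(b, swap(i)) - g(b, r) for i in range(4))
+     assert lhs >= g(b, rs) - g(b, r) - 1e-9   # Inequality (2)
+ print("Inequality (2) held on all random instances")
+ ```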
205
+
206
+ Consider an imaginary algorithm which is similar to our online reserve price algorithm, but whose gain on item $j$ at every step $t$ is $\max \{\mathrm{REV}_j(\boldsymbol{r}^t), \mathrm{REV}_j(\mathbf{0})\}$. Since the online reserve price algorithm selects at every step $t$ either $\boldsymbol{r}^t$ or $\mathbf{0}$, each with probability $1/2$, its revenue is at least half that of the imaginary algorithm. Hence, by Theorem 2 and the $(1,1)$-concavity of $H$ (by Lemmas 1 and 2), we deduce the following theorem.
207
+
208
+ Theorem 3 The online bandit reserve price algorithm achieves $(1/2, O(m\sqrt{nm}\log T\sqrt{T}))$ -regret.
209
+
210
+ # 4 Conclusion
211
+
212
+ In this paper, we have introduced a framework for designing efficient online learning algorithms. Apart from standard regularity requirements (such as a compact convex domain, Lipschitzness, etc.), a new crucial property is $(\lambda, \mu)$-concavity. Designing efficient online learning algorithms is then reduced to determining the concavity parameters of the reward functions. We show the applicability of the framework through applications in auction design. Given the simplicity of the new notion of concavity, we hope that our approach will be useful in designing efficient online algorithms with approximate regret bounds for other problems.
213
+
214
+ # Broader Impact
215
+
216
+ Ethical considerations and direct future societal consequences are not relevant in the context of this paper.
217
+
218
+ # Acknowledgments and Disclosure of Funding
219
+
220
+ This work is supported by the ANR project OATA no. ANR-15-CE40-0015-01.
221
+
222
+ # References
223
+
224
+ [1] Jacob D. Abernethy, Elad Hazan, and Alexander Rakhlin. Competing in the dark: An efficient algorithm for bandit linear optimization. In Proc. 21st Annual Conference on Learning Theory (COLT), pages 263-274, 2008.
225
+ [2] Naman Agarwal, Alon Gonen, and Elad Hazan. Learning in non-convex games with an optimization oracle. In Proc. 32nd Conference on Learning Theory, volume 99, pages 18–29, 2019.
226
+ [3] Baruch Awerbuch and Robert Kleinberg. Online linear optimization and adaptive routing. Journal of Computer and System Sciences, 74(1):97-114, 2008.
227
+ [4] Avrim Blum and Jason D Hartline. Near-optimal online auctions. In Proc. 16th Symposium on Discrete Algorithms, pages 1156-1163, 2005.
228
+ [5] George W Brown. Iterative solution of games by fictitious play. Activity analysis of production and allocation, 13(1):374-376, 1951.
229
+ [6] Sébastien Bubeck, Nicolo Cesa-Bianchi, and Sham Kakade. Towards minimax policies for online linear optimization with bandit feedback. In Annual Conference on Learning Theory, volume 23, pages 41.1-41.24, 2012.
230
+ [7] Sébastien Bubeck, Yin Tat Lee, and Ronen Eldan. Kernel-based methods for bandit convex optimization. In Proc. 49th Symposium on Theory of Computing, pages 72-85, 2017.
231
+ [8] Nicolo Cesa-Bianchi, Claudio Gentile, and Yishay Mansour. Regret minimization for reserve prices in second-price auctions. IEEE Transactions on Information Theory, 61(1):549-564, 2015.
232
+ [9] Varsha Dani, Sham M Kakade, and Thomas P Hayes. The price of bandit information for online optimization. In Advances in Neural Information Processing Systems, pages 345-352, 2008.
233
+ [10] Constantinos Daskalakis and Vasilis Syrgkanis. Learning in auctions: Regret is hard, envy is easy. In 57th Annual Symposium on Foundations of Computer Science, pages 219-228, 2016.
234
+ [11] Miroslav Dudik, Nika Haghtalab, Haipeng Luo, Robert E Schapire, Vasilis Syrgkanis, and Jennifer Wortman Vaughan. Oracle-efficient online learning and auction design. In Proc. 58th Symposium on Foundations of Computer Science (FOCS), pages 528-539, 2017.
235
+ [12] Shaddin Dughmi, Tim Roughgarden, and Qiqi Yan. From convex optimization to randomized mechanisms: toward optimal combinatorial auctions. In Proc. 43rd ACM Symposium on Theory of Computing, pages 149-158, 2011.
236
+ [13] Abraham D Flaxman, Adam Tauman Kalai, and H Brendan McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. In Proc. 16th Symposium on Discrete Algorithms, pages 385-394, 2005.
237
+ [14] Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences, 55(1):119-139, 1997.
238
+ [15] Drew Fudenberg and David K Levine. The theory of learning in games. MIT press, 1998.
239
+
240
+ [16] James Hannan. Approximation to Bayes risk in repeated play. Contributions to the Theory of Games, 3:97-139, 1957.
241
+ [17] Elad Hazan. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4):157-325, 2016.
242
+ [18] Elad Hazan and Satyen Kale. Online submodular minimization. Journal of Machine Learning Research, 13:2903-2922, 2012.
243
+ [19] Elad Hazan and Yuanzhi Li. An optimal algorithm for bandit convex optimization. arXiv preprint arXiv:1603.04350, 2016.
244
+ [20] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291-307, 2005.
245
+ [21] Robert Kleinberg and Tom Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In Proc. 44th Symposium on Foundations of Computer Science, pages 594-605, 2003.
246
+ [22] Robert D Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. In Advances in Neural Information Processing Systems, pages 697–704, 2005.
247
+ [23] Nick Littlestone and Manfred K Warmuth. The weighted majority algorithm. Information and computation, 108(2):212-261, 1994.
248
+ [24] Thodoris Lykouris, Vasilis Syrgkanis, and Éva Tardos. Learning and efficiency in games with dynamic population. In Proc. 27th Symposium on Discrete algorithms, pages 120-129, 2016.
249
+ [25] Roger B Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58-73, 1981.
250
+ [26] Hariharan Narayanan and Alexander Rakhlin. Random walk approach to regret minimization. In Advances in Neural Information Processing Systems, pages 1777-1785, 2010.
251
+ [27] Tim Roughgarden. Intrinsic robustness of the price of anarchy. Journal of the ACM (JACM), 62(5):32, 2015.
252
+ [28] Tim Roughgarden. The price of anarchy in games of incomplete information. ACM Transactions on Economics and Computation, 3(1):6, 2015.
253
+ [29] Tim Roughgarden and Joshua R Wang. Minimizing regret with multiple reserves. In Proc. 2016 ACM Conference on Economics and Computation, pages 601-616, 2016.
254
+ [30] Tim Roughgarden, Vasilis Syrgkanis, and Eva Tardos. The price of anarchy in auctions. Journal of Artificial Intelligence Research, 59:59–101, 2017.
255
+ [31] Shai Shalev-Shwartz et al. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107-194, 2012.
256
+ [32] Vasilis Syrgkanis and Eva Tardos. Composable and efficient mechanisms. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, pages 211-220. ACM, 2013.
257
+ [33] Vasilis Syrgkanis, Akshay Krishnamurthy, and Robert Schapire. Efficient algorithms for adversarial contextual learning. In International Conference on Machine Learning, pages 2159-2168, 2016.
abanditlearningalgorithmandapplicationstoauctiondesign/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1dc9541a94806c332dda8d5e36ad12fef99a6e20f8afe3ea95c580ee63a871fb
3
+ size 108975
abanditlearningalgorithmandapplicationstoauctiondesign/layout.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9fc2c92c4351180f6bafed0a24f2f9c21ce28e02e78a9ae625869e2be264208d
3
+ size 564484
abayesiannonparametricsviewintodeeprepresentations/d8a34f6c-1cc1-4bcb-87ee-43a49156d29d_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1b5acd56bc8074debf6aeee233d5e101a9f3a0d239ca7924cf9f6bb462d59705
3
+ size 73870
abayesiannonparametricsviewintodeeprepresentations/d8a34f6c-1cc1-4bcb-87ee-43a49156d29d_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5bf3ca67be88fd9c65b36b0acb9e1266f06f4436b7577735c47762c3c426197d
3
+ size 85755
abayesiannonparametricsviewintodeeprepresentations/d8a34f6c-1cc1-4bcb-87ee-43a49156d29d_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bdb768b3e2a15193f89cedb79ed9c4c94c1ade7437a1cd28cc4fdc00e0cf504f
3
+ size 2016124
abayesiannonparametricsviewintodeeprepresentations/full.md ADDED
@@ -0,0 +1,284 @@
1
+ # A Bayesian Nonparametrics View into Deep Representations
2
+
3
+ Michał Jamroz*
4
+
5
+ AGH University of Science and Technology
6
+
7
+ Krakow, Poland
8
+
9
+ mijamroz@agh.edu.pl
10
+
11
+ Marcin Kurdziel*
12
+
13
+ AGH University of Science and Technology
14
+
15
+ Krakow, Poland
16
+
17
+ kurdziel@agh.edu.pl
18
+
19
+ Mateusz Opala
20
+
21
+ AGH University of Science and Technology
22
+
23
+ Krakow, Poland
24
+
25
+ mo@matthewopala.com
26
+
27
+ # Abstract
28
+
29
+ We investigate neural network representations from a probabilistic perspective. Specifically, we leverage Bayesian nonparametrics to construct models of neural activations in Convolutional Neural Networks (CNNs) and latent representations in Variational Autoencoders (VAEs). This allows us to formulate a tractable complexity measure for distributions of neural activations and to explore global structure of latent spaces learned by VAEs. We use this machinery to uncover how memorization and two common forms of regularization, i.e. dropout and input augmentation, influence representational complexity in CNNs. We demonstrate that networks that can exploit patterns in data learn vastly less complex representations than networks forced to memorize. We also show marked differences between effects of input augmentation and dropout, with the latter strongly depending on network width. Next, we investigate latent representations learned by standard $\beta$ -VAEs and Maximum Mean Discrepancy (MMD) $\beta$ -VAEs. We show that aggregated posterior in standard VAEs quickly collapses to the diagonal prior when regularization strength increases. MMD-VAEs, on the other hand, learn more complex posterior distributions, even with strong regularization. While this gives a richer sample space, MMD-VAEs do not exhibit independence of latent dimensions. Finally, we leverage our probabilistic models as an effective sampling strategy for latent codes, improving quality of samples in VAEs with rich posteriors.
30
+
31
+ # 1 Introduction
32
+
33
+ Neural networks that differ only in initial parameter values converge to different minima of the cost function. This observation raises the following question: is this variability simply a manifestation of the numerical leeway afforded by model overparametrization or, perhaps, a manifestation of a more fundamental discord in the ways neural networks make predictions? This question is not only important from a practical perspective – e.g. in efforts to pinpoint and interpret factors behind specific network responses – but is also fundamental to our understanding of information processing in neural models. Recently, Raghu et al. [2017], Morcos et al. [2018] and Kornblith et al. [2019] showed that under a suitable similarity metric neural representations do in fact share some common structure. Yet, their work is limited to finding representational similarity between pairs of converged networks.
34
+
35
+ In this article we aim to go beyond pairwise similarities and characterize neural representations from a probabilistic perspective. Specifically, we focus on two goals: characterizing sets of representations that are effectively reachable by convolutional networks and uncovering structure in latent spaces learned by variational autoencoders. To construct such characterizations we adopt Dirichlet Process Gaussian Mixture Models (DP-GMMs) as density models for deep representations. We then leverage tractable quantities in DP-GMMs to compare neural models with respect to the sets of representations they learn. Our main contributions are: (1) we propose probabilistic models for neural representations and use them to characterize sets of learned representations, (2) we show that memorizing nets learn vastly more complex representations than networks trained on real data, (3) we demonstrate markedly different effects of two common forms of regularization on the complexity of learned representations and (4) we characterize latent spaces learned by $\beta$ -VAEs and MMD-VAEs, demonstrating marked differences in representational capacity of their aggregated posteriors.
36
+
37
+ # 2 Dirichlet Process Mixture Model for neural representations
38
+
39
+ Our main idea in this work is to investigate neural representations using nonparametric mixture models. These flexible density models naturally adapt to the complexity of the underlying data distribution. We therefore leverage them as a principled way to quantify and compare the complexity of representations learned by neural networks and to investigate latent representations in generative models. The specific nonparametric model we use in this work, namely the DP-GMM, was chosen because certain quantities of interest to us - e.g. when studying independence of dimensions in latent codes - are tractable in this model. Furthermore, it is consistent in total variation for distributions that are in the KL support of the prior and - assuming that the approximated density is sufficiently smooth - has a near-minimax contraction rate [Ghosal and Van der Vaart, 2017, sections 7.2 and 9.4].
40
+
41
+ We use DP-GMM to model representations learned by kernels in convolutional neural networks and to capture distributions of latent codes in variational autoencoders. In the latter case we take a learned inference distribution $q_{\phi}(z \mid x)$ and construct a model for the aggregated posterior:
42
+
43
+ $$
44
+ q_{\phi}(\boldsymbol{z}) = \int_{\boldsymbol{x}} q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x}) \, p(\boldsymbol{x}) \, d\boldsymbol{x}. \tag{1}
45
+ $$
46
+
47
+ Therefore, the set of observations in DP-GMM is simply the set of latent codes inferred for test images.
48
+
49
+ When modelling representations learned by CNN kernels, or neurons' representations, we use a construction similar to the one employed in Raghu et al. [2017], Morcos et al. [2018]. Consider a single convolutional kernel $k$ in a CNN layer. To construct a representation of the feature learned by $k$ , we take a fixed sequence of input images $[\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_l]$ and calculate a sequence of kernel responses: $[k(\mathbf{x}_1), k(\mathbf{x}_2), \dots, k(\mathbf{x}_l)]$ . These responses form a volume with shape $l \times h \times w$ , where $h$ and $w$ are height and width of the layer output, respectively. We then perform an average pooling across spatial dimensions, obtaining an $l \times 1$ vector $\mathbf{a}_k$ that can be interpreted as a fingerprint of the feature learned by $k$ , i.e., a neuron's representation:
50
+
51
+ $$
52
+ \mathbf{a}_k = \left[ \operatorname{avg\_pool}(k(\mathbf{x}_1)), \operatorname{avg\_pool}(k(\mathbf{x}_2)), \dots, \operatorname{avg\_pool}(k(\mathbf{x}_l)) \right]. \tag{2}
53
+ $$
54
+
55
+ We repeat this procedure for every kernel in the given layer, using the same input sequence in each case. However, unlike recent works on similarity of neural representations [Raghu et al., 2017, Morcos et al., 2018, Kornblith et al., 2019], we do not seek to find a transformation between two sets of neuron representations (i.e. between a pair of conv layers) that maximizes their similarity. We instead treat each learned representation $\mathbf{a}_k$ as a realization of a random variable that follows the distribution of representations learned in the given network layer. Under this interpretation we train multiple networks with identical architectures and hyper-parameter values, but different random initializations. Finally, we pool together representations learned by these networks<sup>1</sup>. Given $n$ trained networks and a convolutional layer with $m$ kernels, the set of DP-GMM observations therefore consists of $n \cdot m$ representations: $\{\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_{n \cdot m}\}$ .
56
+
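+ As a concrete illustration, the sketch below assembles this observation matrix for one layer. It is a minimal sketch: `layer_response(net, x)` is a hypothetical helper returning the layer's activation volume of shape (m, h, w) for a single input image, and all shapes are illustrative.
+
+ ```python
+ import numpy as np
+
+ def kernel_fingerprints(nets, images, layer_response):
+     """Pool one l-dimensional fingerprint per (network, kernel) pair.
+
+     Returns an array of shape (n * m, l): the DP-GMM observations for
+     n trained networks, each with m kernels, over l input images.
+     """
+     rows = []
+     for net in nets:
+         # (l, m): average each kernel's response map over its spatial dims.
+         pooled = np.stack([layer_response(net, x).mean(axis=(1, 2))
+                            for x in images])
+         rows.append(pooled.T)  # m fingerprints of length l for this network
+     return np.concatenate(rows, axis=0)
+ ```
+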
57
+ There are two important aspects to our setup for convolutional networks. On the technical side, it is invariant with respect to the ordering of kernels in convolutional layers - any information about the initial ordering of kernels in a conv layer is lost in the set of observations modelled by DP-GMM. More importantly, this setup does not attempt to model a set of representations learned by a specific network instance. Rather, we want to capture the distribution of representations that are effectively reachable by a given layer in a certain network architecture and under a certain training regime. This can be seen as capturing a restricted form of the notion of effective capacity formalized in Arpit et al. [2017]. That is, we can compare different networks and training regimes with respect to the sets of representations that are effectively learned under stochastic gradient descent.
60
+
61
+ In the following sections we outline the DP-GMM formulation used in this work and explain how we employ it to quantify representational complexity in CNNs and investigate latent spaces in VAEs.
62
+
63
+ # 2.1 Generative model
64
+
65
+ Let $\mathcal{D} = \{\pmb{x}_1, \pmb{x}_2, \dots, \pmb{x}_N\}$ be a dataset of $N$ samples from some unknown $D$ -dimensional probability distribution. To construct a density model for this distribution, we postulate the following generative model for $\pmb{x}$:
66
+
67
+ $$
+ \begin{aligned}
+ \alpha &\sim \operatorname{Gamma}(1, 1), \\
+ G \mid \alpha &\sim \operatorname{DP}(\operatorname{NIW}(\boldsymbol{\theta}_0), \alpha), \\
+ \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k &\sim G, \\
+ \boldsymbol{x} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k &\sim \mathcal{N}(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k).
+ \end{aligned} \tag{3}
+ $$
82
+
83
+ In short, observations are assumed to come from a mixture of Gaussian components. Component parameters have a Dirichlet Process prior with concentration $\alpha$ (also uncertain, i.e. a model parameter with a $\operatorname{Gamma}(1, 1)$ prior). $G$ in this formulation stands for a random measure over components and their parameters. The base distribution in the Dirichlet Process, i.e. the prior over the component mean $\pmb{\mu}_{k}$ and covariance $\pmb{\Sigma}_{k}$ , is chosen to be a Normal-inverse-Wishart (NIW) distribution with hyper-parameters $\theta_0$ :
84
+
85
+ $$
86
+ p(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) = \operatorname{NIW}(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k \mid \boldsymbol{\theta}_0), \quad \boldsymbol{\theta}_0 = \{\mathbf{m}_0, \nu_0, \kappa_0, \boldsymbol{S}_0\}. \tag{4}
87
+ $$
88
+
89
+ We explain the choice of these hyper-parameters in Appendix A.
90
+
91
+ We use the Chinese Restaurant Process (CRP) as a constructive definition of the Dirichlet Process prior. In short, the CRP describes a process of either assigning an observation to an existing component or creating a new component for it. In particular, let $c_{i} \in \mathbf{c} = \{c_{1}, c_{2}, \ldots, c_{N}\}$ be the component for the observation $\mathbf{x}_{i}$ and assume that the vector of component assignments for the other observations, denoted by $c_{-i} = c \setminus \{c_{i}\}$ , is known. Then the probability of $c_{i} = k$ given $\boldsymbol{c}_{-i}$ under the CRP is:
92
+
93
+ $$
94
+ p(c_i = k \mid \boldsymbol{c}_{-i}, \alpha) = \begin{cases} \dfrac{N_{k,-i}}{\alpha + N - 1} & \text{if component } k \text{ exists}, \\[4pt] \dfrac{\alpha}{\alpha + N - 1} & \text{if } k \text{ is a new component}, \end{cases}
95
+ $$
96
+
97
+ where $N_{k, -i}$ is the number of observations already assigned to the $k$ -th component. This mechanism effectively puts a prior on the number of mixture components, making it a model parameter. The choice of NIW prior over component parameters is also significant. NIW is a conjugate prior to the multivariate Normal likelihood, which greatly simplifies the model.
98
+
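+ For intuition, the sketch below draws a synthetic dataset from the generative model in Eqn. 3 using the seating rule above together with the NIW base distribution. It is a minimal sketch: the hyper-parameter values are placeholders rather than the ones used in our experiments (Appendix A), and $\alpha$ is held fixed instead of carrying its Gamma(1, 1) prior.
+
+ ```python
+ import numpy as np
+ from scipy.stats import invwishart
+
+ def sample_dp_gmm(N, D, alpha=1.0, kappa0=0.1, seed=0):
+     """Draw N observations from the DP-GMM prior via the CRP."""
+     rng = np.random.default_rng(seed)
+     m0, nu0, S0 = np.zeros(D), D + 2, np.eye(D)  # placeholder NIW hypers
+     counts, params, X = [], [], []
+     for i in range(N):
+         # CRP: existing component k w.p. N_k / (alpha + i), new w.p. alpha / (alpha + i).
+         probs = np.array(counts + [alpha], dtype=float)
+         k = rng.choice(len(probs), p=probs / probs.sum())
+         if k == len(counts):  # new component: draw (mu, Sigma) from the NIW base
+             Sigma = invwishart.rvs(df=nu0, scale=S0, random_state=rng)
+             mu = rng.multivariate_normal(m0, Sigma / kappa0)
+             counts.append(0)
+             params.append((mu, Sigma))
+         counts[k] += 1
+         mu, Sigma = params[k]
+         X.append(rng.multivariate_normal(mu, Sigma))
+     return np.array(X)
+ ```
+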
99
+ We employ Collapsed Gibbs Sampling (CGS) [Neal, 2000] to estimate posterior over DP-GMM parameters given $\mathcal{D}$ . CGS samples from the posterior by iteratively assigning observations to components. That is, given an observation $\pmb{x}_i$ , CGS samples a component $c_{i}$ from $p(c_{i} \mid c_{-i}, \pmb{x}_{i}, \alpha, \pmb{\theta})$ where $\pmb{\theta}$ are the parameters of the NIW posterior distributions over means and covariances. However, thanks to the conjugate prior on the component parameters, $p(c_{i} \mid c_{-i}, \pmb{x}_{i}, \alpha, \pmb{\theta})$ does not depend on $\mu_{k}$ and $\Sigma_{k}$ , as they can be marginalized out. This marginalization greatly reduces sampling variance [Liu et al., 1994]. That said, parameters for a given component can be easily recovered by sampling from the NIW posterior (see Appendix A for details):
100
+
101
+ $$
102
+ p(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k \mid \mathcal{D}, \boldsymbol{c}) = \operatorname{NIW}(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k \mid \boldsymbol{\theta}_k), \quad \boldsymbol{\theta}_k = \{\mathbf{m}_k, \nu_k, \kappa_k, \boldsymbol{S}_k\}. \tag{5}
103
+ $$
104
+
105
+ An outcome of one CGS iteration is an assignment of observations to components. Collectively these iterations form a Markov chain that approximates the posterior distribution over DP-GMM parameters. In turn, this posterior induces a posterior predictive distribution for previously unseen observations: $p(\boldsymbol{x}^* \mid \mathcal{D})$ , which can be seen as the model's view of the underlying data distribution. The posterior predictive given specific component assignments $c_t$ (i.e. given a specific Gibbs sampling step):
106
+
107
+ $$
108
+ p(\boldsymbol{x}^{*} \mid \mathcal{D}, \boldsymbol{c}_t) = \int p(\boldsymbol{x}^{*} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{c}_t) \, p(\boldsymbol{\mu}, \boldsymbol{\Sigma} \mid \mathcal{D}, \boldsymbol{c}_t) \, d\boldsymbol{\mu} \, d\boldsymbol{\Sigma}
109
+ $$
110
+
111
+ has a closed form solution (see Appendix B for details). The posterior predictive $p(\pmb{x}^* \mid \mathcal{D})$ is an expectation over component assignments and can be approximated by sampling steps from the Markov chain:
112
+
113
+ $$
114
+ p(\boldsymbol{x}^{*} \mid \mathcal{D}) = \int p(\boldsymbol{x}^{*} \mid \mathcal{D}, \boldsymbol{c}) \, p(\boldsymbol{c}) \, d\boldsymbol{c} \approx \frac{1}{T} \sum_{t=1}^{T} p(\boldsymbol{x}^{*} \mid \mathcal{D}, \boldsymbol{c}_t). \tag{6}
115
+ $$
116
+
117
+ # 3 Quantifying complexity and structure of posterior distributions
118
+
119
+ We use DP-GMM posterior predictive distributions to compare neural networks with respect to their representational complexity. To this end, we approximate the relative entropy between the posterior predictive $p(\pmb{x}^* \mid \mathcal{D})$ and a chosen least-assumption distribution $m(\pmb{x}^*)$ , i.e. the Kullback-Leibler (KL) divergence $D_{\mathrm{KL}}(p \parallel m)$ . From an information theory point of view, this relative entropy can be seen as a measure of the inefficiency of approximating the posterior predictive with $m(\pmb{x}^*)$ . Alternatively, $D_{\mathrm{KL}}(p \parallel m)$ can be seen as the information gain from observing many samples from $p(\pmb{x}^* \mid \mathcal{D})$ while assuming the $m(\pmb{x}^*)$ prior. The measure obviously depends on the choice of $m(\pmb{x}^*)$ . We pick $m(\pmb{x}^*)$ to be the maximum differential entropy distribution that captures the mean of the data and the variance in each dimension. That is, we choose the least-assumption distribution to be a multivariate Gaussian with the mean and the diagonal covariance matrix estimated from $\mathcal{D}$ .<sup>3</sup>
120
+
121
+ We do not have a closed-form expression for the relative entropy $D_{\mathrm{KL}}(p \parallel m)$. Fortunately, we can easily draw samples from the posterior predictive $p(\boldsymbol{x}^* \mid \mathcal{D})$ by first sampling a step from the CGS chain and then sampling from the posterior predictive given the component assignment (Eqn. 6). This gives us a Monte Carlo approximation to the relative entropy:
122
+
123
+ $$
124
+ D_{\mathrm{KL}}(p \parallel m) \approx \frac{1}{TS} \sum_{t=1}^{T} \sum_{s=1}^{S} \left[ \log p(\boldsymbol{x}_{st}^{*} \mid \mathcal{D}, \boldsymbol{c}_t) - \log m(\boldsymbol{x}_{st}^{*}) \right], \quad \boldsymbol{x}_{st}^{*} \sim p(\boldsymbol{x}^{*} \mid \mathcal{D}, \boldsymbol{c}_t). \tag{7}
125
+ $$
126
+
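+ Given the mixture recovered from one Gibbs step, the inner sum of Eqn. 7 is a few lines of code. The sketch below is a minimal illustration, assuming the per-step predictive is supplied as a mixture of multivariate Student's t components (weights, locations, shape matrices and degrees of freedom recovered from the CGS trace); these inputs are placeholders.
+
+ ```python
+ import numpy as np
+ from scipy.special import logsumexp
+ from scipy.stats import multivariate_normal, multivariate_t
+
+ def kl_vs_gaussian(weights, locs, shapes, dfs, data_mean, data_var, S=10_000, seed=0):
+     """MC estimate of KL(p || m) for one Gibbs step (inner sum of Eqn. 7):
+     p is a Student's t mixture predictive, m a diagonal Gaussian fit to the data."""
+     rng = np.random.default_rng(seed)
+     comps = [multivariate_t(loc=l, shape=s, df=d)
+              for l, s, d in zip(locs, shapes, dfs)]
+     # Ancestral sampling: pick components, then draw from each in a batch.
+     ks = rng.choice(len(weights), size=S, p=weights)
+     x = np.concatenate([np.atleast_2d(comps[k].rvs(size=int((ks == k).sum()),
+                                                    random_state=rng))
+                         for k in np.unique(ks)])
+     log_p = logsumexp(np.stack([np.log(w) + c.logpdf(x)
+                                 for w, c in zip(weights, comps)]), axis=0)
+     log_m = multivariate_normal(mean=data_mean, cov=np.diag(data_var)).logpdf(x)
+     return float(np.mean(log_p - log_m))
+ ```
+
+ Averaging this quantity over the retained Gibbs steps yields the outer sum of Eqn. 7.
+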
127
+ When modelling aggregated posteriors in VAEs we are also interested in the extent to which dimensions in the latent code are independent. To gauge the degree of dependency between latent dimensions, we estimate the total correlation between dimensions in the posterior predictive. That is, we approximate the KL divergence between the full posterior predictive $p(z^{*} \mid \mathcal{D})$ and its dimension-independent version:
128
+
129
+ $$
130
+ p_{ind}(\boldsymbol{z}^{*} \mid \mathcal{D}) = \prod_{i=1}^{D} p(z_i^{*} \mid \mathcal{D}). \tag{8}
131
+ $$
132
+
133
+ Note that $p_{ind}(z^{*} \mid \mathcal{D})$ is simply a product-of-marginals distribution. Again, the KL divergence between $p$ and $p_{ind}$ has no closed-form solution. However, note that the posterior predictive density $p(z^{*} \mid \mathcal{D}, c)$ is a mixture of Student's t-distributions. Because marginals of a Student's t-distribution are also Student's t-distributions, $p(z_{i}^{*} \mid \mathcal{D}, c)$ can be expressed as a simple mixture:
134
+
135
+ $$
136
+ p(z_i^{*} \mid \mathcal{D}, \boldsymbol{c}) = \sum_{k=1}^{K} \alpha_k \int \operatorname{St}(\boldsymbol{z}^{*} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k, \nu_k) \, d\boldsymbol{z}_{-i}^{*} = \sum_{k=1}^{K} \alpha_k \operatorname{St}(z_i^{*} \mid \mu_k^{i}, \Sigma_k^{ii}, \nu_k). \tag{9}
137
+ $$
138
+
139
+ We can leverage this density to approximate $D_{\mathrm{KL}}(p \parallel p_{ind})$ with samples from the Markov chain:
140
+
141
+ $$
142
+ D_{\mathrm{KL}}(p \parallel p_{ind}) \approx \frac{1}{TS} \sum_{t=1}^{T} \sum_{s=1}^{S} \left[ \log p(\boldsymbol{z}_{st}^{*} \mid \mathcal{D}, \boldsymbol{c}_t) - \log p_{ind}(\boldsymbol{z}_{st}^{*} \mid \mathcal{D}, \boldsymbol{c}_t) \right], \quad \boldsymbol{z}_{st}^{*} \sim p(\boldsymbol{z}^{*} \mid \mathcal{D}, \boldsymbol{c}_t). \tag{10}
143
+ $$
144
+
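+ Thanks to Eqn. 9, evaluating $\log p_{ind}$ only requires one-dimensional Student's t densities. A minimal sketch, reusing the mixture representation from the previous snippet:
+
+ ```python
+ import numpy as np
+ from scipy.special import logsumexp
+ from scipy.stats import t as student_t
+
+ def log_p_ind(z, weights, locs, shapes, dfs):
+     """Log product-of-marginals density (Eqn. 8) at points z of shape (S, D).
+     Marginal i of component k is St(mu_k^i, Sigma_k^{ii}, nu_k), as in Eqn. 9."""
+     total = np.zeros(z.shape[0])
+     for i in range(z.shape[1]):
+         comp_logpdf = np.stack([np.log(w) + student_t.logpdf(z[:, i], df=df,
+                                 loc=l[i], scale=np.sqrt(s[i, i]))
+                                 for w, l, s, df in zip(weights, locs, shapes, dfs)])
+         total += logsumexp(comp_logpdf, axis=0)  # 1-D mixture density in dim i
+     return total
+ ```
+
+ The total correlation in Eqn. 10 is then the average of `log_p - log_p_ind` over samples drawn from the joint predictive, exactly as in the relative entropy estimate above.
+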
145
+ To approximate divergences in Eqn. 7 and 10 we perform 2,000 CGS steps. Next, we throw away the first 1,000 steps and thin the remaining part of the chain by taking every 20-th Gibbs step. We then calculate mean, minimum and maximum KL divergence across remaining Gibbs steps. In each step we take $10^{5}$ samples from the posterior predictive.
146
+
147
+ # 4 Representational complexity in Convolutional Networks
148
+
149
+ Experimental setup. First, we employ DP-GMMs to investigate representational complexity in CNNs that can exploit patterns in data and networks that are forced to memorize random labels. We also compare models with different depths, widths and regularization techniques. To this end, we train several CNN architectures on CIFAR-10 and Mini-ImageNet datasets<sup>4</sup>. Each network is trained with ground-truth labels and with a variant of the dataset in which labels were randomly permuted (further referred to as memorizing nets). All memorizing nets are trained on the same fixed random permutation of labels. Furthermore, when fitting true labels we train networks with no additional regularization, with image augmentation, with dropout and with both regularizers. See Appendix C for details on the datasets, network architectures and training hyper-parameters.
150
+
151
+ For each combination of a CNN architecture, label set and regularization, we train 50 networks starting from different random initializations and pool together their kernel representations (Section 2). One important choice when constructing CNN representations is the number of input images used to calculate kernel responses (Eqn. 2). On one hand, the vector of kernel activations should form a distinct fingerprint of the learned feature. On the other hand, the difficulty of estimating DP-GMM parameters increases with the dimensionality of representations. In practice we first collect kernel responses over the entire test part of the dataset. Assuming $l$ test images, a layer with $m$ kernels and $n$ trained networks, we obtain an $\mathbf{A}_{n \cdot m \times l}$ matrix with kernel representations. We then reduce the dimensionality of representations $(l)$ by performing a Singular Value Decomposition of $\mathbf{A}$ and keeping only the $d$ right-singular vectors with the largest singular values. We found that retaining up to 80 singular vectors is sufficient to uncover differences in posterior distributions of kernel representations. We retain an equal number of singular vectors when comparing layers trained under different scenarios.
152
+
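+ This reduction step amounts to a truncated SVD; a minimal numpy sketch (with the default $d = 80$ reflecting the upper bound above):
+
+ ```python
+ import numpy as np
+
+ def reduce_representations(A, d=80):
+     """Project representations A of shape (n * m, l) onto the top-d
+     right-singular vectors, giving DP-GMM observations of dimension d."""
+     _, _, Vt = np.linalg.svd(A, full_matrices=False)  # rows of Vt: right-singular vectors
+     return A @ Vt[:d].T
+ ```
+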
153
+ ![](images/af7b0e05a9efefc4c3a42daf5db9e53046939ce71721c4598991d0dd4285a93e.jpg)
154
+
155
+ ![](images/0d92799bdbb4de2840abb4b3f2a41c629eaae0b32c0ff8e3b7fdf6de47e1e470.jpg)
156
+
157
+ ![](images/484ee96f2bed52da7b980c66a07db4c04e5fb8f549127970d9c3329facf683dc.jpg)
158
+
159
+ ![](images/807da534b36d2dae8ea032726c76b39d6ecd35533a8e317f6dbd2af8231626a1.jpg)
160
+
161
+ Figure 1: Relative entropies of posterior predictive distributions for CNN representations. Results are reported for true and randomly permuted labels, including dropout and image augmentation in the former case. In each case we report mean, minimum and maximum relative entropy across averaged Gibbs steps. In plot titles, CNN AxB refers to a CNN with depth A and width B in the final conv layer. Legend: random; true; true (augmentation); true (dropout); true (dropout, augmentation).
174
+
175
+ Representational complexity. Results from CNN experiments are collected in Fig. 1. Additional results are reported in Appendix C. First, we observe that networks that can exploit patterns in data learn vastly less complex representations than networks forced to memorize, even though in principle both are perfectly capable of memorizing training examples [Zhang et al., 2017]. This finding supports conclusions drawn in [Arpit et al., 2017]. However, we also observe large differences in the effects of dropout compared to image augmentation or no regularization: dropout typically increases representational complexity. The extent of this increase depends on the network width, with narrow dropout nets learning representations with complexity more akin to that of memorizing nets. Dropout experiments also illustrate that low representational complexity is not a necessary prerequisite for generalization: while representations in dropout nets are highly sensitive to network initialization, they still form solutions that generalize. Finally, we observe increased representational complexity in middle layers of deep but narrow nets, when trained with no regularization (CNN 11x128 and CNN 11x192 in Appendix C). This is remedied by image augmentation, which behaves consistently across evaluated architectures.
178
+
179
+ # 5 Latent space structure in variational autoencoders
180
+
181
+ Variational autoencoders learn a variational posterior (or inference) distribution $q_{\phi}(z \mid x)$ and a generative distribution $p_{\theta}(x \mid z)$ by maximizing:
182
+
183
+ $$
184
+ \mathcal{L}_{\beta}(\boldsymbol{x}, \boldsymbol{\theta}, \phi) = \mathbb{E}_{q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x})} \left[ \log p_{\boldsymbol{\theta}}(\boldsymbol{x} \mid \boldsymbol{z}) \right] - \beta f(q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x}), p(\boldsymbol{z})) \tag{11}
185
+ $$
186
+
187
+ under a suitable divergence measure $f(q,p)$ between the posterior $q$ and prior $p$ . In the standard VAE model $f(q,p)$ corresponds to the KL divergence, and $\beta = 1$ . In this setting the objective in Eqn. 11 is equivalent to the evidence lower bound on the intractable data likelihood [Kingma and Welling, 2014]. Recently, however, there has been increasing interest in alternative formulations. Higgins et al. [2017] and Burgess et al. [2018] investigated VAEs with $\beta > 1$ and observed that such $\beta$ -VAEs tend to learn disentangled latent codes $z$ , i.e. codes where individual dimensions capture semantically meaningful properties of observations. Chen et al. [2017] suggest that $D_{\mathrm{KL}}$ can be too restrictive a regularizer and may cause the model to learn uninformative latent codes. Zhao et al. [2017, 2019] studied VAEs with an alternative regularization, namely the Maximum Mean Discrepancy (MMD) divergence [Gretton et al., 2012]. MMD was also investigated by Tolstikhin et al. [2018] in the context of Wasserstein autoencoders. In short, given a positive-definite kernel $k: \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$ , MMD between two probability distributions $P$ and $Q$ on $\mathcal{Z}$ is a distance between their kernel mean embeddings. MMD has an unbiased estimator [Gretton et al., 2012] that easily integrates with gradient-based training:
188
+
189
+ $$
190
+ \operatorname{MMD}_k(P_Z, Q_Z) \approx \frac{1}{n(n-1)} \sum_{\substack{i,j=1 \\ i \neq j}}^{n} \left[ k(z_i^p, z_j^p) + k(z_i^q, z_j^q) \right] - \frac{2}{n^2} \sum_{i,j=1}^{n} k(z_i^p, z_j^q), \tag{12}
191
+ $$
192
+
193
+ where $\{z_i^p\}_{i = 1}^n$ and $\{z_i^q\}_{i = 1}^n$ are samples from $P$ and $Q$ , respectively.
194
+
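+ For completeness, a direct implementation of the estimator in Eqn. 12 with the inverse multiquadratics kernel $k(a, b) = C / (C + \|a - b\|^2)$ used in our MMD-VAE experiments; the scale $C$ is an illustrative placeholder.
+
+ ```python
+ import numpy as np
+
+ def imq_kernel(a, b, C=1.0):
+     """Inverse multiquadratics kernel matrix between rows of a and b."""
+     sq_dists = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
+     return C / (C + sq_dists)
+
+ def mmd(zp, zq, C=1.0):
+     """Unbiased estimator of Eqn. 12 for samples zp ~ P and zq ~ Q, shape (n, d)."""
+     n = zp.shape[0]
+     kpp, kqq, kpq = imq_kernel(zp, zp, C), imq_kernel(zq, zq, C), imq_kernel(zp, zq, C)
+     within = (kpp.sum() - np.trace(kpp) + kqq.sum() - np.trace(kqq)) / (n * (n - 1))
+     return within - 2.0 * kpq.sum() / n ** 2
+ ```
+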
195
+ In this section we leverage DP-GMMs to investigate aggregated posteriors (Eqn. 1) learned by VAEs across a range of $\beta$ values for both standard $D_{\mathrm{KL}}$ and MMD regularizations. This gives us a view into the structure of the latent spaces in these models. Additional results are reported in Appendix D.
196
+
197
+ ![](images/3be65165528c2e6753f36f5136f28db33c651f5733b11c7aa182978452d8b4bb.jpg)
198
+ Figure 2: Relative entropies (left) and total correlations (right) in posterior predictive distributions for latent codes in $\beta$ -VAEs and MMD-VAEs across a range of $\beta$ values. In each case we report mean, minimum and maximum estimate across averaged Gibbs steps.
199
+
200
+ ![](images/efa0f90624391d91ca42d33d8fa514cd5ae9145ca95b3e8dc065aa6176099d07.jpg)
201
+
202
+ ![](images/5e391384d42af83bc7ef846f98035d688ef3d3db6a75bc2108e8475006601b96.jpg)
203
+
204
+ ![](images/d801117df6ace691d475fd22384590740d9e0d85e5fda2f61f4e8bcf96495a60.jpg)
205
+
206
+ Experimental setup. All experiments were carried out on the CelebA [Liu et al., 2015] and $\mathrm{Anime}^5$ datasets, consisting of images of human and animated character faces, respectively. Training protocols and network architectures follow those in Tolstikhin et al. [2018]; in particular, we learn latent codes with $d = 64$ dimensions and use an inverse multiquadratics kernel in MMD-VAEs. See Appendix D for more details on dataset preparation and training hyper-parameters.
207
+
208
+ ![](images/ee2e897b4891833bd1db0ba6d2cb0bdd3d523e92c736fcfd534e23850c9ded45.jpg)
209
+ Figure 3: Samples generated with latent codes drawn from either the joint predictive density $p(z^{*})$ (top) or product of marginals density $p_{ind}(z^{*})$ (bottom) for VAEs trained on CelebA dataset.
210
+
211
+ After training a given model, we sample latent codes for the entire test part of the respective dataset and estimate a DP-GMM model for the set of sampled codes: $\mathcal{D}_z = \{z_1,z_2,\dots ,z_n\}$ . This gives us a CGS trace from which we can recover the posterior predictive $p(z^{*}|\mathcal{D}_{z})$ over the latent space learned by this particular VAE. We use these inferred distributions as proxies to investigate aggregated posteriors. For notational simplicity we will drop conditioning on $\mathcal{D}_z$ in the analysis below, and simply write $p(z^{*})$ for the DP-GMM posterior predictive.
212
+
213
+ # 5.1 Latent space learned by $\beta$ -VAEs and MMD-VAEs
214
+
215
+ We explore latent representations learned by VAEs in two ways. First, we quantify complexity of learned representations via relative entropies in posterior predictive distributions (Eqn. 7). Next, in order to investigate the degree of dependency between latent dimensions, we approximate total correlations between dimensions in posterior predictive densities (Eqn. 10).
216
+
217
+ Effects of $\beta$ regularization on the aggregated posterior. Fig. 2 (left) shows the relationship between the $\beta$ value and the complexity of latent representations learned by standard $\beta$ -VAEs and MMD-VAEs. This result demonstrates that $\beta$ has a particularly strong regularizing effect on the aggregated posterior in standard $\beta$ -VAEs: the distribution of latent codes in this model rapidly simplifies as the $\beta$ coefficient grows. For $\beta > 10$ , the aggregated posterior becomes almost indistinguishable from a diagonal multivariate normal distribution with mean and variance estimated from $\mathcal{D}_z$ (i.e. the least-assumption distribution in the construction of relative entropy). In other words, the posterior in $\beta$ -VAEs with strong regularization collapses to the prior. Regularization is much weaker under the MMD divergence, where relative entropies indicate a rich latent space even with large $\beta$ values ( $\beta = 1000$ ).
218
+
219
+ Independence of latent dimensions. $\beta$ -VAEs were observed to learn disentangled representations when trained with large $\beta$ values [Higgins et al., 2017]. Here we leverage the posterior predictive $p(z^{*})$ to investigate the influence of large $\beta$ on the covariance structure of the aggregated posterior $q_{\phi}(z)$ . Fig. 2 (right) demonstrates that latent dimensions in standard $\beta$ -VAEs decorrelate with increasing $\beta$ value: the joint predictive density over latent codes becomes indistinguishable from its product-of-marginals approximation. This agrees with the disentanglement phenomenon observed in these models. MMD-VAEs, on the other hand, keep their latent codes relatively correlated, even with strong regularization.
220
+
221
+ To further illustrate how $\beta$ regularization affects coupling between latent dimensions, we also sampled VAE observations with latent codes drawn either from a joint posterior predictive $p(z^{*})$ or a product of marginals density $p_{ind}(z^{*})$ (Eqn. 8). Samples from MMD-VAEs and standard $\beta$ -VAEs trained with small $\beta$ often degrade when dependence between latent dimensions is dropped (Fig. 3). In a strongly regularized $\beta$ -VAE samples from the joint and the product of marginals distributions are indistinguishable, but a simplistic latent space translates to low sample fidelity and diversity. Overall, our results show that disentanglement in $\beta$ -VAEs comes at the cost of reduced representational capacity.
222
+
223
+ ![](images/7f8fceaeb2cab9954c5909c73b3e142f727a4332518d9232fd035bce7d0684f8.jpg)
224
+ Figure 4: MMD-VAE samples generated with latent codes drawn from either the prior $p(z)$ (top) or DP-GMM posterior predictive $p(z^{*} \mid c)$ (bottom). Results for models trained on CelebA dataset.
225
+
226
+ # 5.2 Improving samples from VAEs with rich posteriors
227
+
228
+ Results presented above show that aggregated posteriors in MMD-VAEs diverge significantly from the prior. This suggests that sampling in MMD-VAEs can be improved by drawing latent codes from an approximation to $q_{\phi}(z)$ , rather than the prior $p(z)$ . In fact, the posterior predictive given component assignments $p(z^{*} \mid c)$ is a natural choice for such an approximation. First, it admits efficient ancestral sampling, where we first sample a component and then the latent code. Second, given the flexibility of DP-GMMs, we may expect that after an initial burn-in period mixtures in the chain will be well adapted to $q_{\phi}(z)$ . Figure 4 compares this sampling scheme with standard sampling from the prior. Clearly, sampling latent codes from a mixture $p(z^{*} \mid c)$ significantly improves the quality of image samples. Note also that a large $\beta$ term only partially remedies issues with samples generated from the prior. We could also sample from the full posterior predictive by first sampling a step from the Markov chain. This could further improve sample diversity at the cost of storing more posterior parameters.
229
+
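+ Concretely, the improved sampling scheme replaces $z \sim p(z)$ with a draw from the fitted mixture. A minimal sketch, where `decoder` is a placeholder for the trained VAE decoder and the component weights, means and covariances come from one Gibbs step (Eqn. 5); for simplicity the components are treated as Gaussians with the recovered parameters.
+
+ ```python
+ import numpy as np
+
+ def sample_from_mixture(decoder, weights, mus, Sigmas, n_samples=16, seed=0):
+     """Ancestral sampling from p(z* | c): pick a component, draw a latent
+     code from it, then decode the code into an image."""
+     rng = np.random.default_rng(seed)
+     ks = rng.choice(len(weights), size=n_samples, p=weights)
+     z = np.stack([rng.multivariate_normal(mus[k], Sigmas[k]) for k in ks])
+     return decoder(z)  # hypothetical: maps (n_samples, 64) latents to images
+ ```
+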
230
+ # 6 Related work
231
+
232
+ Several recent works explored similarity of representations learned by neural networks. Raghu et al. [2017] construct neurons' representations as vectors of their responses over a fixed set of inputs. This differs from a typical notion of a neural representation understood as a vector of activations in a network layer given a single input example. They show that representations learned by networks trained from different initializations exhibit similarity in canonical directions. A follow-up work by Morcos et al. [2018] proposes an alternative way to subsume correlation in canonical directions. They study similarity of neural representations in memorizing and learning networks, compare similarity of representations in wide and narrow networks and investigate training dynamics in RNNs. More recently, Kornblith et al. [2019] proposed a kernel-based similarity index that more reliably captures correspondence between network representations. This allowed them, among other things, to pinpoint depth-related pathologies in convolutional networks. The main difference between these works and our approach is that we do not seek to construct a similarity score for pairs of layer representations. We instead investigate distributions of neural representations learned across many trained networks and study aggregated posteriors in deep generative models. Rather than focusing mainly on network similarity, our goal is to compare networks with respect to the complexity of effectively learnable representations or the structure of the learned latent space. This requires a more flexible tool than a similarity score, which in our case is a nonparametric mixture model. A work more akin to ours was presented by Montavon et al. [2011], whose aim was to verify whether successive network layers construct representations that are increasingly good at solving the underlying task. Still, their analysis sheds no light on the complexity of the set of representations that can be effectively reached by a specific network architecture and training regime.
233
+
234
+ Our work also touches on the effects of memorization on learned representations. Zhang et al. [2017] demonstrate that neural networks easily memorize random assignment of labels and random input examples. An immediate conclusion from this work is that priors encoded in current network architectures are not a factor that could prevent memorization. If so, then is the observed efficacy of neural networks actually due to learning patterns in data? Arpit et al. [2017] compare how memorizing networks and networks trained on real data fit input examples. They demonstrate that the latter fit simple examples first. They also show that memorizing networks have more complex decision boundaries. Wilson and Izmailov [2020] demonstrate that memorization of images with random labels can be replicated with Gaussian processes. They then discuss generalization from a perspective of priors over functions that are encoded by composing model architectures with priors over their parameters. They argue that for CNNs these prior distributions concentrate on functions that exploit patterns in data, and attribute memorization to non-zero prior density for random label assignments. In particular, they demonstrate that a simple CNN with random weights induces a covariance structure in MNIST images that correlates with ground-truth labels. We contribute to this line of research by demonstrating that the set of representations that are effectively constructed by memorizing networks is more complex than the set of representations constructed by networks that learn on true data. This shows that CNNs that can exploit patterns in data converge to different solutions than memorizing nets, despite no difference in architecture, regularization or training hyper-parameters.
237
+
238
+ Our results demonstrate that disentanglement in standard $\beta$ -VAEs comes with a simplistic aggregated posterior, which translates to reduced fidelity and diversity of samples. Gao et al. [2019] investigate learning of disentangled representations in a Correlation Explanation (CorEx) framework [Steeg and Galstyan, 2014]. Their basic idea is to learn a parametrized probability distribution $p_{\theta}(\mathbf{x},\mathbf{z})$ which jointly maximizes the total correlation in $\mathbf{x}$ that is explained by the latent code $\mathbf{z}$ and minimizes total correlation in the latent code itself. Gao et al. formulate a variational lower bound to CorEx and show that under certain assumptions it is equivalent to the ELBO in VAEs. From this perspective, $\beta$ regularization controls the contribution of mutual information between observations and latent dimensions to the optimization objective. Gao et al. also propose to improve samples in their model by drawing latent codes from a factorial approximation to the aggregated posterior. Our empirical results for standard $\beta$ -VAEs are compatible with the findings of Gao et al. That said, our framework can also be used to investigate aggregated posteriors in VAEs with non-standard divergences, such as MMD-VAEs. In these models a factorial approximation to the aggregated posterior yields poor samples, which we remedy by approximating the posterior with a Gaussian mixture.
239
+
240
+ While in this work we compare distributions of neural representations via relative entropies, one could argue that the number of components in a posterior distribution is itself a useful proxy for representational complexity. For example, sample complexity of learning a Gaussian mixture is linear (up to a poly-logarithmic factor) in the number of components [Ashtiani et al., 2018]. Note, however, that the Dirichlet Process prior is not a suitable tool for recovering component counts in mixture distributions. The Dirichlet Process is a prior on infinite mixtures and will not concentrate on a finite number of components in the infinite data limit [Miller and Harrison, 2013, 2014]. One can obtain consistency for the number of components with a suitable prior on finite mixtures [Miller and Harrison, 2018]. Still, analysis of component counts comes with caveats. It assumes that observations actually come from a finite mixture and that the form of the components' distribution is known - fairly strong assumptions for a complex generative process behind neural representations. For these reasons we draw our conclusions from predictive densities, not component counts.
241
+
242
+ # 7 Conclusions
243
+
244
+ We presented a Bayesian Nonparametrics framework for investigating neural representations. The main strength of this probabilistic approach is that it allows us to investigate representations that are effectively reachable by gradient-based training, rather than quantifying only the theoretical model complexity. We used it to compare the complexity of representations learned by CNNs and to explore the structure of latent spaces learned by VAEs. Our results show marked differences between memorizing networks and networks that learn on true data, as well as between two forms of regularization, namely dropout and image augmentation. Finally, we showed marked differences between standard $\beta$ -VAEs and MMD-VAEs with respect to their ability to represent diverse image features in the latent space.
245
+
246
+ Our complexity analysis may have direct applications in the development of latent variable generative models. First, it enables model comparison with respect to the capacity of the learned latent space. Second, we show that Gaussian mixtures can be used to improve samples from models with rich posteriors. Our results may also have immediate applications in interpretability research. A number of interpretation methods attempt explanation by capturing semantics of network units [Gilpin et al., 2018]. However, we uncover cases, such as dropout nets, where learned representations are sensitive to network initialization, raising doubts about whether capturing semantics of network units is useful in these settings.
247
+
248
+ # 8 Broader Impact
249
+
250
+ This work has direct applications in deep generative models. Probabilistic models of latent spaces may inform development of architectures and training methods that improve sample fidelity and control over sample semantics. While generative modelling has many positive applications – e.g. in computer aided art and conversational systems – any work on generative models may potentially be used to produce deceptive and fraudulent content. This work also adds to the evidence that convolutional networks excel at exploiting patterns in data. However, it is important to recognize that our results do not speak to the issue of biases that may be inherited from training examples. In particular, undue trust in data-driven systems – including neural networks – runs the risk of reinforcing biases and prejudice existing in training data.
251
+
252
+ # Acknowledgements
253
+
254
+ Research presented in this work was supported by funds assigned to AGH University of Science and Technology by the Polish Ministry of Science and Higher Education. This research was supported in part by PL-Grid Infrastructure.
255
+
256
+ # References
257
+
258
+ Devansh Arpit, Stanisław Jastrzebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron C. Courville, Yoshua Bengio, and Simon Lacoste-Julien. A closer look at memorization in deep networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 233-242, 2017.
259
+ Hassan Ashtiani, Shai Ben-David, Nicholas J. A. Harvey, Christopher Liaw, Abbas Mehrabian, and Yaniv Plan. Nearly tight sample complexity bounds for learning mixtures of gaussians via sample compression schemes. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 3-8 December 2018, Montreal, Canada, pages 3416-3425, 2018.
260
+ Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in $\beta$ -VAE. arXiv preprint arXiv:1804.03599, 2018.
261
+ Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.
262
+ Shuyang Gao, Rob Brekelmans, Greg Ver Steeg, and Aram Galstyan. Auto-encoding total correlation explanation. In The 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019, 16-18 April 2019, Naha, Okinawa, Japan, pages 1157-1166, 2019.
263
+ Subhashis Ghosal and Aad Van der Vaart. Fundamentals of nonparametric Bayesian inference. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2017.
264
+ L. H. Gilpin, D. Bau, B. Z. Yuan, A. Bajwa, M. Specter, and L. Kagal. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pages 80-89, 2018.
265
+ Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander J. Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(25):723-773, 2012.
266
+ Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. $\beta$ -VAE: Learning basic visual concepts with a constrained variational framework. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.
267
+
268
+ Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014.
269
+ Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey E. Hinton. Similarity of neural network representations revisited. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 3519-3529, 2019.
270
+ Jun S Liu, Wing Hung Wong, and Augustine Kong. Covariance structure of the gibbs sampler with applications to the comparisons of estimators and augmentation schemes. Biometrika, 81(1):27-40, 1994.
271
+ Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 3730-3738. IEEE Computer Society, 2015.
272
+ Jeffrey W. Miller and Matthew T. Harrison. A simple example of dirichlet process mixture inconsistency for the number of components. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 199-206, 2013.
273
+ Jeffrey W. Miller and Matthew T. Harrison. Inconsistency of pitman-yor process mixtures for the number of components. Journal of Machine Learning Research, 15(96):3333-3370, 2014.
274
+ Jeffrey W. Miller and Matthew T. Harrison. Mixture models with a prior on the number of components. Journal of the American Statistical Association, 113(521):340-356, 2018.
275
+ Grégoire Montavon, Mikio L. Braun, and Klaus-Robert Müller. Kernel analysis of deep networks. Journal of Machine Learning Research, 12(78):2563-2581, 2011.
276
+ Ari S. Morcos, Maithra Raghu, and Samy Bengio. Insights on representational similarity in neural networks with canonical correlation. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 3-8 December 2018, Montreal, Canada, pages 5732-5741, 2018.
277
+ Radford M Neal. Markov chain sampling methods for dirichlet process mixture models. Journal of computational and graphical statistics, 9(2):249-265, 2000.
278
+ Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. SVCCA: singular vector canonical correlation analysis for deep learning dynamics and interpretability. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6076-6085, 2017.
279
+ Greg Ver Steeg and Aram Galstyan. Discovering structure in high-dimensional data through correlation explanation. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 577-585, 2014.
280
+ Ilya O. Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schölkopf. Wasserstein autoencoders. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018.
281
+ Andrew Gordon Wilson and Pavel Izmailov. Bayesian deep learning and a probabilistic perspective of generalization. arXiv preprint arXiv:2002.08791, 2020.
282
+ Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.
283
+ Shengjia Zhao, Jiaming Song, and Stefano Ermon. InfoVAE: Information maximizing variational autoencoders. arXiv preprint arXiv:1706.02262, 2017.
284
+ Shengjia Zhao, Jiaming Song, and Stefano Ermon. InfoVAE: Balancing learning and inference in variational autoencoders. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 5885-5892, 2019.
abayesiannonparametricsviewintodeeprepresentations/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:51fe74aaa6cb023c8639cc18e452ab324365a31d57737b3ae42819da59024e5c
3
+ size 370382
abayesiannonparametricsviewintodeeprepresentations/layout.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:edf9774963664ebed0e7adcd8a61f77649e11188e485c14e74ffc4217869acbe
3
+ size 381944
abayesianperspectiveontrainingspeedandmodelselection/84e1e3b5-a1be-4ac2-99de-64a3ebfb8185_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f04cde4f6615e0f731241ee1aa12d94763ec078dee975c4bd48cfe2b4e8526f5
3
+ size 76974
abayesianperspectiveontrainingspeedandmodelselection/84e1e3b5-a1be-4ac2-99de-64a3ebfb8185_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:51945c869e243e1fb17e8f417039204b511932b945d0490f4f38c1781b8dfc72
3
+ size 95673
abayesianperspectiveontrainingspeedandmodelselection/84e1e3b5-a1be-4ac2-99de-64a3ebfb8185_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2b519125e2780e7821e71fc5b46157439b12303a847e96cad579212da31231c9
3
+ size 1050267
abayesianperspectiveontrainingspeedandmodelselection/full.md ADDED
@@ -0,0 +1,320 @@
1
+ # A Bayesian Perspective on Training Speed and Model Selection
2
+
3
+ Clare Lyle
4
+
5
+ Binxin Ru†
6
+
7
+ Lisa Schut†
8
+
9
+ Yarin Gal†
10
+
11
+ Mark van der Wilk†
12
+
13
+ # Abstract
14
+
15
+ We take a Bayesian perspective to illustrate a connection between training speed and the marginal likelihood in linear models. This provides two major insights: first, that a measure of a model's training speed can be used to estimate its marginal likelihood. Second, that this measure, under certain conditions, predicts the relative weighting of models in linear model combinations trained to minimize a regression loss. We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks. We further provide encouraging empirical evidence that the intuition developed in these settings also holds for deep neural networks trained with stochastic gradient descent. Our results suggest a promising new direction towards explaining why neural networks trained with stochastic gradient descent are biased towards functions that generalize well.
16
+
17
+ # 1 Introduction
18
+
19
+ Choosing the right inductive bias for a machine learning model, such as convolutional structure for an image dataset, is critical for good generalization. The problem of model selection concerns itself with identifying good inductive biases for a given dataset. In Bayesian inference, the marginal likelihood (ML) provides a principled tool for model selection. In contrast to cross-validation, for which computing gradients is cumbersome, the ML can be conveniently maximised using gradients when its computation is tractable. Unfortunately, computing the marginal likelihood for complex models such as neural networks is typically intractable. Workarounds such as variational inference suffer from expensive optimization of many parameters in the variational distribution and differ significantly from standard training methods for Deep Neural Networks (DNNs), which optimize a single parameter sample from initialization. A method for estimating the ML that closely follows standard optimization schemes would pave the way for new practical model selection procedures, yet remains an open problem.
20
+
21
+ A separate line of work aims to perform model selection by predicting a model's test set performance. This has led to theoretical and empirical results connecting training speed and generalization error [17, 21]. This connection has yet to be fully explained, as most generalization bounds in the literature depend only on the final weights obtained by optimization, rather than on the trajectory taken during training, and therefore are unable to capture this relationship. Understanding the link between training speed, optimization and generalization thus presents a promising step towards developing a theory of generalization which can explain the empirical performance of neural networks.
22
+
23
+ In this work, we show that the above two lines of inquiry are in fact deeply connected. We investigate the connection between the log ML and the sum of predictive log likelihoods of datapoints, conditioned on preceding data in the dataset. This perspective reveals a family of estimators of the log ML which depend only on predictions sampled from the posterior of an iterative Bayesian updating procedure. We study the proposed estimator family in the context of linear models, where we can conclusively analyze its theoretical properties. Leveraging the fact that gradient descent can produce exact posterior samples for linear models [31] and the infinite-width limit of deep neural networks [7, 26], we show that this estimator can be viewed as the sum of a subset of the model's training losses in an iterative optimization procedure. This immediately yields an interpretation of marginal likelihood estimation as measuring a notion of training speed in linear models. We further show that this notion of training speed is predictive of the weight assigned to a model in a linear model combination trained with gradient descent, hinting at a potential explanation for the bias of gradient descent towards models that generalize well in more complex settings.
26
+
27
+ We demonstrate the utility of the estimator through empirical evaluations on a range of model selection problems, confirming that it can effectively approximate the marginal likelihood of a model. Finally, we empirically evaluate whether our theoretical results for linear models may have explanatory power for more complex models. We find that an analogue of our estimator for DNNs trained with stochastic gradient descent is predictive of both final test accuracy and the final weight assigned to the model after training a linear model combination. Our findings in the deep learning setting hint at a promising avenue of future work in explaining the empirical generalization performance of DNNs.
28
+
29
+ # 2 Background and Related Work
30
+
31
+ # 2.1 Bayesian Parameter Inference
32
+
33
+ A Bayesian model $\mathcal{M}$ is defined by a prior distribution over parameters $\theta$ , $P(\theta|\mathcal{M})$ , and a prediction map from parameters $\theta$ to a likelihood over the data $\mathcal{D}$ , $P(\mathcal{D}|\theta, \mathcal{M})$ . Parameter fitting in the Bayesian framework entails finding the posterior distribution $P(\theta|\mathcal{D})$ , which yields robust and principled uncertainty estimates. Though exact inference is possible for certain models like Gaussian processes (GPs) [38], it is intractable for DNNs. Here, approximations such as variational inference [4] are used [14, 5, 27, 16, 9] to improve robustness and obtain useful uncertainty estimates.
34
+
35
+ Variational approximations require optimisation over the parameters of the approximate posterior distribution. This optimization over distributions changes the loss landscape, and is significantly slower than the pointwise optimization used in standard DNNs. Pointwise optimization methods inspired by Bayesian posterior sampling can produce similar variation and uncertainty estimates as variational inference, while improving computational efficiency [45, 30, 29]. An appealing example of this is ensembling [25], which works by training a collection of models in the usual pointwise manner, starting from $k$ independently initialized points.
36
+
37
+ In the case of linear models, this is exactly equivalent to Bayesian inference, as this sample-then-optimize approach yields exact posterior samples [31, 36]. He et al. [18] extend this approach to obtain posterior samples from DNNs in the infinite-width limit.
38
+
39
+ # 2.2 Bayesian Model Selection
40
+
41
+ In addition to finding model parameters, Bayesian inference can also perform model selection over different inductive biases, which are specified through both model structure (e.g. convolutional vs fully connected) and the prior distribution on parameters. The Bayesian approach relies on finding the posterior over models $P(\mathcal{M}|\mathcal{D})$ , which uses the marginal likelihood (ML) as its likelihood function:
42
+
43
+ $$
44
+ P(\mathcal{D} | \mathcal{M}) = \int_{\theta} P(\mathcal{D} | \theta) \, P(\theta | \mathcal{M}) \, d\theta = \mathbb{E}_{P(\theta | \mathcal{M})} P(\mathcal{D} | \theta). \tag{1}
45
+ $$
46
+
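+ Equation 1 suggests a naive simple Monte Carlo estimator: sample parameters from the prior and average their likelihoods. The sketch below does this for a toy Bayesian linear regression model with Gaussian prior and noise; all hyper-parameters are illustrative, and this direct estimator is typically high-variance and hard to scale, which motivates the alternative perspective developed below.
+
+ ```python
+ import numpy as np
+ from scipy.stats import norm
+
+ def log_ml_mc(X, y, S=10_000, prior_scale=1.0, noise_scale=0.1, seed=0):
+     """MC estimate of log P(D|M) = log E_{P(theta|M)} P(D|theta)
+     for linear regression y = X @ theta + noise."""
+     rng = np.random.default_rng(seed)
+     thetas = rng.normal(0.0, prior_scale, size=(S, X.shape[1]))
+     # Dataset log-likelihood under each prior sample: shape (S,).
+     log_liks = norm.logpdf(y[None, :], loc=thetas @ X.T, scale=noise_scale).sum(axis=1)
+     m = log_liks.max()  # log-mean-exp for numerical stability
+     return m + np.log(np.mean(np.exp(log_liks - m)))
+ ```
+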
47
+ Instead of computing the full posterior, it is common to select the model with the highest marginal likelihood. This is known as type-II maximum likelihood [27, 28] and is less prone to overfitting than performing maximum likelihood over the parameters and model combined. This is because the marginal likelihood is able to trade off between model fit and model complexity [39]. Maximising the ML is standard procedure when it is easy to compute. For example, in Gaussian processes it is used to set simple model parameters like smoothness [38], while recent work has demonstrated that complex inductive biases in the form of invariances can also be learned [44].
48
+
49
+ For many deep models, computing Equation 1 is intractable, and obtaining approximations that are accurate enough for model selection and that scale to complex models is an active area of research [23]. In general, variational lower bounds that scale are too loose when applied to DNNs [5]. Deep Gaussian processes provide a case where the bounds do work [6, 8], but heavy computational load holds performance several years behind deep learning. While ensembling methods provide useful uncertainty estimates and improve the computational efficiency of the variational approach, they have not yet provided a solution for Bayesian model selection.
52
+
53
+ # 2.3 Generalization and Risk Minimization
54
+
55
+ Bayesian model selection addresses a subtly different problem from the risk minimization framework used in many learning problems. Nonetheless, the two are closely related; Germain et al. [15] show that in some cases optimizing a PAC-Bayesian risk bound is equivalent to maximizing the marginal likelihood of a Bayesian model. In practice, maximizing an approximation of the marginal likelihood in DNNs trained with SGD can improve generalization performance [41]. More recently, Arora et al. [1] computed a data-dependent complexity measure which resembles the data-fit term in the marginal likelihood of a Bayesian model and which relates to optimization speed, hinting at a potential connection between the two.
56
+
57
+ At the same time, generalization in deep neural networks (DNNs) remains mysterious, with classical learning-theoretic bounds failing to predict the impressive generalization performance of DNNs [47, 33]. Recent work has shown that DNNs are biased towards functions that are 'simple', for various definitions of simplicity [22, 13, 43, 42]. PAC-Bayesian generalization bounds, which can quantify a broad range of definitions of complexity, can attain non-vacuous values [32, 10, 11], but nonetheless exhibit only modest correlation with generalization error [21]. These bounds depend only on the final distribution over parameters after training; promising alternatives consider properties of the trajectory taken by a model during optimization [17, 35]. This trajectory-based perspective is a promising step towards explaining the correlation between the number of training steps required for a model to minimize its objective function and its final generalization performance observed in a broad range of empirical analyses [21, 3, 34, 40].
58
+
59
+ # 3 Marginal Likelihood Estimation with Training Statistics
60
+
61
+ In this section, we investigate the equivalence between the marginal likelihood (ML) and a notion of training speed in models trained with an exact Bayesian updating procedure. For linear models and infinitely wide neural networks, exact Bayesian updating can be done using gradient descent optimisation. For these cases, we derive an estimator of the marginal likelihood which 1) is related to how quickly a model learns from data, 2) only depends on statistics that can be measured during pointwise gradient-based parameter estimation, and 3) becomes tighter for ensembles consisting of multiple parameter samples. We also investigate how gradient-based optimization of a linear model combination can implicitly perform approximate Bayesian model selection in Section 3.3.
62
+
63
+ # 3.1 Training Speed and the Marginal Likelihood
64
+
65
+ Let $\mathcal{D}$ denote a dataset of the form $\mathcal{D} = (\mathcal{D}_i)_{i=1}^n = (x_i, y_i)_{i=1}^n$ , and let $\mathcal{D}_{<i} = (\mathcal{D}_j)_{j=1}^{i-1}$ with $\mathcal{D}_{<1} = \emptyset$ . We will abbreviate $P(\mathcal{D}|\mathcal{M}) \coloneqq P(\mathcal{D})$ when considering a single model $\mathcal{M}$ . Observe that $P(\mathcal{D}) = \prod_{i=1}^n P(\mathcal{D}_i|\mathcal{D}_{<i})$ to get the following form of the log marginal likelihood:
66
+
67
+ $$
68
+ \log P (\mathcal {D}) = \log \prod_ {i = 1} ^ {n} P \left(\mathcal {D} _ {i} \mid \mathcal {D} _ {< i}\right) = \sum_ {i = 1} ^ {n} \log P \left(\mathcal {D} _ {i} \mid \mathcal {D} _ {< i}\right) = \sum_ {i = 1} ^ {n} \log \left[ \mathbb {E} _ {P (\theta | \mathcal {D} _ {< i})} P \left(\mathcal {D} _ {i} \mid \theta\right) \right]. \tag {2}
69
+ $$
70
+
71
+ If we define training speed as the number of data points required by a model to form an accurate posterior, then models which train faster – i.e. whose posteriors assign high likelihood to the data after conditioning on only a few data points – will obtain a higher marginal likelihood. Interpreting the negative log posterior predictive probability $-\log P(\mathcal{D}_i|\mathcal{D}_{<i})$ of each data point as a loss function, the negative log ML then takes the form of the sum over the losses incurred by each data point during training, i.e. the area under a training curve defined by a Bayesian updating procedure.
72
+
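+ The decomposition in Equation 2 can be checked numerically. The sketch below (our illustration, with arbitrary synthetic data) verifies that, for conjugate Bayesian linear regression, summing the one-step-ahead posterior predictive log-likelihoods recovers the joint log marginal likelihood.
+
+ ```python
+ import numpy as np
+ from scipy.stats import norm, multivariate_normal
+
+ rng = np.random.default_rng(1)
+ n, d, alpha, sigma2 = 30, 3, 1.0, 0.25  # prior variance alpha, noise variance sigma2
+ X = rng.normal(size=(n, d))
+ y = X @ rng.normal(size=d) + rng.normal(scale=np.sqrt(sigma2), size=n)
+
+ seq = 0.0
+ for i in range(n):
+     Xp, yp = X[:i], y[:i]  # D_{<i}
+     S = np.linalg.inv(Xp.T @ Xp / sigma2 + np.eye(d) / alpha)  # posterior covariance
+     mu = S @ Xp.T @ yp / sigma2                                # posterior mean
+     m, v = X[i] @ mu, X[i] @ S @ X[i] + sigma2                 # predictive moments
+     seq += norm(m, np.sqrt(v)).logpdf(y[i])                    # log P(D_i | D_<i)
+
+ joint = multivariate_normal(np.zeros(n), alpha * X @ X.T + sigma2 * np.eye(n)).logpdf(y)
+ print(seq, joint)  # agree up to numerical error
+ ```
+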
73
+ # 3.2 Unbiased Estimation of a Lower Bound
74
+
75
+ In practice, computing $\log P(\mathcal{D}_i|\mathcal{D}_{< i})$ may be intractable, necessitating approximate methods to estimate the model evidence.
76
+
77
+ In our analysis, we are interested in estimators of $\log P(\mathcal{D})$ computed by drawing $k$ samples of $\theta \sim P(\theta | \mathcal{D}_{<i})$ for each $i = 1, \dots, n$. We can directly estimate a lower bound $\mathcal{L}(\mathcal{D}) = \sum_{i=1}^{n} \mathbb{E}_{\theta \sim P(\theta|\mathcal{D}_{<i})}[\log P(\mathcal{D}_i | \theta)]$ using the log likelihoods of these samples:
78
+
79
+ $$
80
+ \hat {\mathcal {L}} (\mathcal {D}) = \sum_ {i = 1} ^ {n} \frac {1}{k} \sum_ {j = 1} ^ {k} \log P \left(\mathcal {D} _ {i} \mid \theta_ {j} ^ {i}\right). \tag {3}
81
+ $$
82
+
83
+ This will produce a biased estimate of the log marginal likelihood due to Jensen's inequality. We can get a tighter lower bound by first estimating $\mathbb{E}[\log P(\mathcal{D}_i|\theta)]$ using our posterior samples before applying the logarithm, obtaining
84
+
85
+ $$
86
+ \hat {\mathcal {L}} _ {k} (\mathcal {D}) = \sum_ {i = 1} ^ {n} \log \frac {1}{k} \sum_ {j = 1} ^ {k} P \left(\mathcal {D} _ {i} \mid \theta_ {j} ^ {i}\right). \tag {4}
87
+ $$
88
+
89
+ Proposition 3.1. Both $\hat{\mathcal{L}}$ and $\hat{\mathcal{L}}_k$, as defined in Equations 3 and 4, are estimators of lower bounds on the log marginal likelihood; that is,
90
+
91
+ $$
92
+ \mathbb{E}[\hat{\mathcal{L}}(\mathcal{D})] = \mathcal{L}(\mathcal{D}) \leq \log P(\mathcal{D}) \quad \text{and} \quad \mathbb{E}[\hat{\mathcal{L}}_k(\mathcal{D})] = \mathcal{L}_k(\mathcal{D}) \leq \log P(\mathcal{D}). \tag{5}
93
+ $$
94
+
95
+ Further, the bias term in $\mathcal{L}$ can be quantified as follows.
96
+
97
+ $$
98
+ \mathcal{L}(\mathcal{D}) = \log P(\mathcal{D}) - \sum_{i=1}^{n} \mathrm{KL}\left( P(\theta|\mathcal{D}_{<i}) \,\|\, P(\theta|\mathcal{D}_{<i+1}) \right). \tag{6}
99
+ $$
100
+
101
+ We include the proofs of this and subsequent results in Appendix A. We observe that both lower bound estimators exhibit decreased variance when using multiple posterior samples; however, $\hat{\mathcal{L}}_k$ also exhibits decreasing bias (with respect to the log ML) as $k$ increases: each $k$ defines a distinct lower bound $\mathcal{L}_k = \mathbb{E}[\hat{\mathcal{L}}_k]$ on $\log P(\mathcal{D})$. The gap induced by the lower bound $\mathcal{L}(\mathcal{D})$ is characterized by the information gain each data point provides about the posterior, as given by the Kullback-Leibler (KL) divergence [24] between the posterior at time $i$ and the posterior at time $i+1$. Thus, while $\mathcal{L}$ has a Bayesian interpretation, it is arguably more closely aligned with the minimum description length notion of model complexity [19].
102
+
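+ The two estimators are straightforward to compute from sampled log-likelihoods. Below is a sketch assuming an array loglik[i, j] = log P(D_i | theta_j^i) of log-likelihoods under k posterior samples per data point; the array layout and function names are our own conventions.
+
+ ```python
+ import numpy as np
+ from scipy.special import logsumexp
+
+ def L_hat(loglik):
+     # Equation 3: average the log-likelihoods, then sum over data points.
+     return loglik.mean(axis=1).sum()
+
+ def L_hat_k(loglik):
+     # Equation 4: average the likelihoods (log-mean-exp), then sum;
+     # this gives the tighter bound L_k, whose bias shrinks as k grows.
+     k = loglik.shape[1]
+     return (logsumexp(loglik, axis=1) - np.log(k)).sum()
+ ```
+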
103
+ When the posterior predictive distribution of our model is Gaussian, we consider a third approach which, unlike the previous two methods, also applies to noiseless models. Let $\mathcal{D} = (X_i,Y_i)_{i=1}^n$, and let $(\theta_j^i)_{j=1}^k$ be $k$ parameter samples from $P(\theta|\mathcal{D}_{<i})$. We assume a mapping $f: \Theta \times X \to Y$ such that sampling parameters $\theta$ and computing $f(\theta,X_i)$ is equivalent to sampling from the posterior $P(\cdot|\mathcal{D}_{<i},X_i)$. We can then obtain the following estimator of a lower bound on $\log P(\mathcal{D})$.
104
+
105
+ Proposition 3.2. Let $P(Y_{i}|\mathcal{D}_{< i},X_{i}) = \mathcal{N}(\mu_{i},\sigma_{i}^{2})$ for some $\mu_i,\sigma_i^2$. Define the standard mean and variance estimators $\hat{\mu}_i = \frac{1}{k}\sum_{j = 1}^k f(\theta_j^i,X_i)$ and $\hat{\sigma}_i^2 = \frac{1}{k - 1}\sum_{j=1}^k (f(\theta_j^i,X_i) - \hat{\mu}_i)^2$. Then the estimator
106
+
107
+ $$
108
+ \hat {\mathcal {L}} _ {S} (\mathcal {D}) = \sum_ {i = 1} ^ {n} \log P \left(Y _ {i} \mid \hat {\mu} _ {i}, \hat {\sigma} _ {i} ^ {2}\right) \tag {7}
109
+ $$
110
+
111
+ is a lower bound on the log ML in expectation, i.e. $\mathbb{E}[\hat{\mathcal{L}}_S(\mathcal{D})]\leq \log P(\mathcal{D})$.
112
+
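+ A sketch of this Gaussian estimator, assuming an array preds[i, j] = f(theta_j^i, X_i) of posterior predictive samples and a target vector y (the names are hypothetical):
+
+ ```python
+ import numpy as np
+ from scipy.stats import norm
+
+ def L_hat_S(preds, y):
+     mu_hat = preds.mean(axis=1)          # \hat{mu}_i
+     var_hat = preds.var(axis=1, ddof=1)  # \hat{sigma}_i^2 (unbiased)
+     return norm(mu_hat, np.sqrt(var_hat)).logpdf(y).sum()  # Equation 7
+ ```
+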
113
+ We provide an empirical evaluation of the rankings provided by the different estimators in Section 4. We find that $\hat{\mathcal{L}}_S$ exhibits the least bias in the presence of limited samples from the posterior, though we emphasize its limitation to Gaussian posteriors; for more general posterior distributions, $\hat{\mathcal{L}}_k$ minimizes bias while still estimating a lower bound.
114
+
115
+ # 3.2.1 Lower bounds via gradient descent trajectories
116
+
117
+ The bounds on the marginal likelihood introduced in the previous section required samples from the sequence of posteriors $P(\theta \mid \mathcal{D}_{<i})$ obtained as data points are incrementally added. Ensembles of linear models trained with gradient descent yield samples from the model posterior. We now show that we can use these samples to estimate the log ML using the estimators introduced in the previous section.
118
+
119
+ We will consider the Bayesian linear regression problem of modelling data $\mathcal{D} = (X_i,Y_i)_{i = 1}^n$ assumed to be generated by the process $Y = \theta^{\top}\Phi(X) + \epsilon$ with $\epsilon \sim \mathcal{N}(0,\sigma_N^2 I)$, for some unknown $\theta$, known $\sigma_N^2$, and feature map $\Phi$.
120
+
121
+ Typically, a Gaussian prior is placed on $\theta$; this prior is then updated as data points are seen to obtain a posterior over parameters. In the overparameterised, noiseless linear regression setting, Matthews et al. [31] show that the distribution over parameters $\theta$ obtained by sampling an initialization $\theta_0$ from the prior and running gradient descent to convergence on the data $\mathcal{D}_{<i}$ is equivalent to sampling from the posterior conditioned on $\mathcal{D}_{<i}$. Osband et al. [36] extend this result to posteriors with observation noise $\sigma_N^2 \neq 0$, under the assumption that the targets $Y_i$ are themselves noiseless observations.
122
+
123
+ Algorithm 1: Marginal Likelihood Estimation for Linear Models
124
+
125
+ Input: A dataset $\mathcal{D} = (x_i, y_i)_{i=1}^n$ , parameters $\mu_0, \sigma_0^2, \sigma_N^2$
126
+
127
+ Result: An estimate of $\mathcal{L}(\mathcal{D})$
128
+
129
+ $$
130
+ \theta_t \leftarrow \theta_0 \sim \mathcal{N}(\mu_0, \sigma_0^2); \quad \tilde{Y} \leftarrow Y + \epsilon, \; \epsilon \sim \mathcal{N}(0, \sigma_N^2); \quad \text{sumLoss} \leftarrow 0;
131
+ $$
132
+
133
+ $$
134
+ \ell(\mathcal{D}_{\leq i}, \theta) \leftarrow \| \tilde{Y}_{\leq i} - \theta^{\top} X_{\leq i} \|_2^2 + \frac{\sigma_N^2}{\sigma_0^2} \| \theta - \theta_0 \|_2^2;
135
+ $$
136
+
137
+ for $\mathcal{D}_i\in \mathcal{D}$ do
138
+
139
+ $$
140
+ \begin{array}{l} \text{sumLoss} \leftarrow \text{sumLoss} + \frac{(\theta_t^{\top} x_i - y_i)^2}{2 \sigma_N^2}; \\ \theta_t \leftarrow \text{GradientDescent}(\ell, \theta_t, \mathcal{D}_{\leq i}); \end{array}
141
+ $$
142
+
143
+ end
144
+
145
+ return sumLoss
146
+
147
+ We can use this procedure to obtain posterior samples for our estimators by iteratively running sample-then-optimize on the sets $\mathcal{D}_{< i}$; Algorithm 1 outlines this approach. Theorem 3.3 shows that the procedure yields an unbiased estimate of $\mathcal{L}(\mathcal{D})$ when a single prior sample is used, and an unbiased estimate of $\mathcal{L}_k(\mathcal{D})$ when an ensemble of $k$ models is trained in parallel.
148
+
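+ Below is a minimal sketch of Algorithm 1 for Bayesian linear regression (our code, not the authors'). We substitute plain full-batch gradient descent for the generic GradientDescent subroutine; the step size and iteration count are assumptions suited to small problems.
+
+ ```python
+ import numpy as np
+
+ def estimate_L(X, y, sigma0_sq, sigmaN_sq, lr=1e-3, steps=5000, seed=0):
+     rng = np.random.default_rng(seed)
+     n, d = X.shape
+     theta0 = rng.normal(scale=np.sqrt(sigma0_sq), size=d)       # prior sample
+     y_tilde = y + rng.normal(scale=np.sqrt(sigmaN_sq), size=n)  # noised targets
+     theta, sum_loss = theta0.copy(), 0.0
+     for i in range(n):
+         # Accumulate the predictive loss on D_i *before* training on it.
+         sum_loss += (theta @ X[i] - y[i]) ** 2 / (2 * sigmaN_sq)
+         # Regularised least squares on D_{<=i}; its minimiser is a posterior sample.
+         Xi, yi = X[: i + 1], y_tilde[: i + 1]
+         for _ in range(steps):
+             grad = 2 * Xi.T @ (Xi @ theta - yi) \
+                  + 2 * (sigmaN_sq / sigma0_sq) * (theta - theta0)
+             theta -= lr * grad
+     return -sum_loss  # estimates L(D) up to an additive constant
+ ```
+
+ Running $k$ independent copies of this procedure and averaging the per-point likelihoods (rather than the log-likelihoods) would correspond to the tighter ensemble estimator $\hat{\mathcal{L}}_k$ of Theorem 3.3.
+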
149
+ Theorem 3.3. Let $\mathcal{D} = (X_i,Y_i)_{i = 1}^n$ and let $(\theta_j^i)_{i,j = 1}^{n,k}$ be generated by the procedure outlined above. Then the estimators $\hat{\mathcal{L}}$, $\hat{\mathcal{L}}_S$, and $\hat{\mathcal{L}}_k$, applied to the collection $(\theta_j^i)$, are estimators of lower bounds on $\log P(\mathcal{D})$. Further, expressing $-\log P(\mathcal{D}_i|\theta)$ as the $\ell_2$ regression loss plus a constant, we then obtain
150
+
151
+ $$
152
+ \log P (\mathcal {D}) \geq \sum_ {i = 1} ^ {n} \mathbb {E} _ {\theta_ {i} \sim P (\cdot | \mathcal {D} _ {< i})} [ \log P (\mathcal {D} _ {i} | \theta_ {i}) ] = \mathbb {E} \sum_ {i = 1} ^ {n} - \ell_ {2} (\mathcal {D} _ {i}, \theta_ {i}) + c = \mathcal {L} (\mathcal {D}) \tag {8}
153
+ $$
154
+
155
+ We highlight that Theorem 3.3 precisely characterizes the lower bound on the marginal likelihood as a sum of 'training losses' based on the regression loss $\ell_2(\mathcal{D}_i, \theta_i)$ .
156
+
157
+ # 3.2.2 From Linear Models to Infinite Neural Networks
158
+
159
+ Beyond linear models, our estimators can further perform model selection in the infinite-width limit of neural networks. Using the optimization procedure described by He et al. [18], we can obtain an exact posterior sample from a GP given by the neural tangent kernel [20]. The iterative training procedure described in Algorithm 1 will thus yield a lower bound on the marginal likelihood of this GP using sampled losses from the optimization trajectory of the neural network. We evaluate this bound in Section 4 and formalize this argument in the following corollary.
160
+
161
+ Corollary 3.4. Let $\mathcal{D}$ be a dataset indexed by our standard notation. Let $f_{0}$ be sampled from an infinitely wide neural network architecture $\mathcal{F}$ under some initialization distribution, and let $f_{\infty}^{i}$ be the limiting solution under the training dynamics defined by He et al. [18] applied to the initialization $f_{0}$ and using data $\mathcal{D}_{< i}$ . Let $K_{\infty}$ denote the neural tangent kernel for $\mathcal{F}$ , and $\mathcal{M} = GP(0, K_{\infty})$ the induced Gaussian Process. Then $f_{\infty}^{i} \sim P(f| \mathcal{D}_{< i}, \mathcal{M})$ , and in the limit of infinite training time, the iterative sample-then-optimize procedure yields an unbiased estimate of $\mathcal{L}(\mathcal{D}|\mathcal{M})$ . Letting $\ell_{2}$ denote the scaled squared $\ell_{2}$ regression loss and $c$ be a constant, we obtain as a direct corollary of Theorem 3.3
162
+
163
+ $$
164
+ \log P(\mathcal{D}) \geq \sum_{i=1}^{n} \mathbb{E}_{f_{\infty}^{i} \sim P(\cdot|\mathcal{D}_{<i})} \left[ \log P(\mathcal{D}_i \mid f_{\infty}^{i}) \right] = \mathbb{E} \sum_{i=1}^{n} -\ell_2(\mathcal{D}_i, f_{\infty}^{i}) + c = \mathcal{L}(\mathcal{D}). \tag{9}
165
+ $$
166
+
167
+ This result provides an additional view on the link between training speed and generalisation in wide neural networks noted by Arora et al. [1], who analysed the convergence of gradient descent.
168
+
169
+ They compute a PAC generalization bound which features a data complexity term equal to that appearing in the marginal likelihood of a Gaussian process [38]. This term bounds the rate of convergence of gradient descent, whereas our notion of training speed is more closely related to sample complexity and makes the connection to the marginal likelihood more explicit.
170
+
171
+ It is natural to ask if such a Bayesian interpretation of the sum over training losses can be extended to non-linear models trained with stochastic gradient descent. Although SGD lacks the exact posterior sampling interpretation of our algorithm, we conjecture a similar underlying mechanism connecting the sum over training losses and generalization. Just as the marginal likelihood measures how well model updates based on previous data points generalize to a new unseen data point, the sum of training losses measures how well parameter updates based on one mini-batch generalize to the rest of the training data. If the update generalizes well, we expect to see a sharper decrease in the training loss, i.e. for the model to train more quickly and exhibit a lower sum over training losses. This intuition can be related to the notion of 'stiffness' proposed by Fort et al. [12]. We provide empirical evidence supporting our hypothesis in Section 4.2.
172
+
173
+ # 3.3 Bayesian Model Selection and Optimization
174
+
175
+ The estimator $\mathcal{L}(\mathcal{D})$ reveals an intriguing connection between pruning in linear model combinations and Bayesian model selection. We assume a data set $\mathcal{D} = (X_i, Y_i)_{i=1}^n$ and a collection of $k$ models $\mathcal{M}_1, \ldots, \mathcal{M}_k$ . A linear regressor $w$ is trained to fit the posterior predictive distributions of the models to the target $Y_i$ ; i.e. to regress on the dataset
176
+
177
+ $$
178
+ (\Phi , Y) = \left(\phi_ {i} = \left(\hat {Y} _ {1} ^ {i}, \dots , \hat {Y} _ {n} ^ {i}\right), Y _ {i}\right) _ {i = 1} ^ {n} \text {w i t h} \hat {Y} _ {j} ^ {i} \sim P (\hat {Y} | \mathcal {D} _ {< i}, X _ {i}, \mathcal {M} _ {j}). \tag {10}
179
+ $$
180
+
181
+ The following result shows that the optimal linear regressor on this data generating distribution assigns the highest weight to the model with the highest $\mathcal{L}(\mathcal{D})$ whenever the model errors are independent. This shows that magnitude pruning in a linear model combination is equivalent to approximate Bayesian model selection, under certain assumptions on the models.
182
+
183
+ Proposition 3.5. Let $\mathcal{M}_1, \ldots, \mathcal{M}_k$ be Bayesian linear regression models with fixed noise variance $\sigma_N^2$ and Gaussian likelihoods. Let $\Phi$ be a (random) matrix of posterior prediction samples, of the form $\Phi[i,j] = \hat{Y}_j^i \sim P(Y_i|\mathcal{D}_{<i}, X_i, \mathcal{M}_j)$. Suppose the following two conditions on the columns of $\Phi$ are satisfied: $\mathbb{E}\langle \Phi[:, i], y \rangle = \mathbb{E}\langle \Phi[:, j], y \rangle$ for all $i, j$, and $\mathbb{E}\langle \Pi_{y^\perp} \phi_i, \Pi_{y^\perp} \phi_j \rangle = 0$ for all $i \neq j$. Let $w^*$ denote the least-squares solution to the regression problem $\min_w \mathbb{E}_\Phi \| \Phi w - y \|^2$. Then the following holds:
184
+
185
+ $$
186
+ \underset{i}{\arg\max}\, w_i^* = \underset{i}{\arg\max}\, \mathcal{L}(\mathcal{D} \mid \mathcal{M}_i) \quad \forall\, w^* \in \underset{w}{\arg\min}\, \mathbb{E} \| \Phi w - y \|^2. \tag{11}
187
+ $$
188
+
189
+ The assumption on the independence of model errors is crucial in the proof of this result: families of models with large and complementary systematic biases may not exhibit this behaviour. We observe in Section 4 that the conditions of Proposition 3.5 are approximately satisfied in a variety of model comparison problems, and running SGD on a linear combination of Bayesian models still leads to solutions that approximate Bayesian model selection. We conjecture that analogous phenomena occur during training within a neural network. The proof of Proposition 3.5 depends on the observation that, given a collection of features, the best least-squares predictor will assign the greatest weight to the feature that best predicts the training data. While neural networks are not linear ensembles of fixed models, we conjecture that, especially for later layers of the network, a similar phenomenon will occur wherein weights from nodes that are more predictive of the target values over the course of training will be assigned higher magnitudes. We empirically investigate this hypothesis in Section 4.2.
190
+
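+ A toy numerical illustration of Proposition 3.5 (our construction): each column of $\Phi$ plays the role of one model's concurrent posterior predictions, with per-model error scales standing in for differing values of $\mathcal{L}(\mathcal{D}|\mathcal{M}_i)$; least squares then places the largest weight on the most accurate column.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(2)
+ n = 500
+ y = rng.normal(size=n)
+ error_scales = [0.1, 0.5, 1.0]  # "model" j's predictive error scale (assumed)
+ Phi = np.stack([y + s * rng.normal(size=n) for s in error_scales], axis=1)
+
+ w_star, *_ = np.linalg.lstsq(Phi, y, rcond=None)
+ print(w_star, np.argmax(w_star))  # largest weight on the most accurate model (index 0)
+ ```
+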
191
+ # 4 Empirical Evaluation
192
+
193
+ Section 3 focused on two key ideas: that training statistics can be used as an estimator for a Bayesian model's marginal likelihood (or a lower bound thereof), and that gradient descent on a linear ensemble implicitly arrives at the same ranking as this estimator in the infinite-sample, infinite-training-time limit. We further conjectured that similar phenomena may also hold for deep neural networks. We now illustrate these ideas in a range of settings. Section 4.1 provides confirmation and quantification of our results for linear models, the model class for which we have theoretical guarantees, while Section 4.2 provides preliminary empirical confirmation that the mechanisms at work in linear models also appear in DNNs.
194
+
195
+ ![](images/0d8f69b72400a4c130f7cfbcaf0bf2759c59cdb6faf7d77ea0b5540afd437e91.jpg)
196
+ Figure 1: Left: ranking according to $\log P(\mathcal{D})$ , $\mathcal{L}(\mathcal{D})$ with exact posterior samples, and $\mathcal{L}(\mathcal{D})$ computed on samples generated by gradient descent. Right: gap between true marginal likelihood and $\mathcal{L}_k(\mathcal{D})$ estimator shrinks as a function of $k$ for both exact and gradient descent-generated samples.
197
+
198
+ ![](images/42876c70da6e9b348a24901b315c36eeeb4b9e3073a7f9708a29fb853dc5c960.jpg)
199
+
200
+ # 4.1 Bayesian Model Selection
201
+
202
+ While we have shown that our estimators correspond to lower bounds on the marginal likelihood, we would also like the relative rankings of models given by our estimators to correlate with those assigned by the marginal likelihood. We evaluate this correlation on three linear model selection problems; for space, we focus on one, feature dimension selection, and provide full details and evaluations for the other two tasks in Appendix B.1.
203
+
204
+ For the feature dimension selection task, we construct a synthetic dataset inspired by Wilson and Izmailov [46] of the form $(\mathbf{X},\mathbf{y})$, where $x_{i} = (y_{i} + \epsilon_{1}, \dots, y_{i} + \epsilon_{15}, \epsilon_{16}, \dots, \epsilon_{30})$, and consider a set of models $\{\mathcal{M}_k\}$ with feature embeddings $\phi_k(x_i) = x_i[1,\ldots ,k]$. The optimal model in this setting is the one which uses exactly the set of 'informative' features $x[1,\dots ,15]$.
205
+
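+ A sketch of this construction (our reading of the description; the noise scale is an assumption):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(3)
+ n = 200
+ y = rng.normal(size=n)
+ eps = rng.normal(scale=0.1, size=(n, 30))
+ # First 15 features are noisy copies of the target; the last 15 are pure noise.
+ X = np.concatenate([y[:, None] + eps[:, :15], eps[:, 15:]], axis=1)
+
+ def phi_k(X, k):
+     return X[:, :k]  # feature embedding of model M_k
+ ```
+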
206
+ We first compare the relative rankings given by the true marginal likelihood with those given by our estimators $\mathcal{L}_S$, $\mathcal{L}$, and $\mathcal{L}_k$. All methods agree on the optimal model: this is a consistent finding across all of the model selection tasks we considered. While all methods lower bound the log marginal likelihood, $\mathcal{L}_k(\mathcal{D})$ and $\mathcal{L}_S(\mathcal{D})$ exhibit a reduced gap compared to the naive lower bound. In the rightmost plot of Figure 1, we further quantify the reduction in the bias of the estimator $\mathcal{L}_k(\mathcal{D})$ described in Section 3. We use exact posterior samples (denoted in the figure simply as posterior samples) and approximate posterior samples generated by the gradient descent procedure outlined in Algorithm 1 with a fixed step size, which induces some approximation error. We find that both sampling procedures exhibit decreasing bias as the number of samples $k$ is increased, with the exact sampling procedure exhibiting a slightly smaller gap than the approximate sampling procedure.
207
+
208
+ We next empirically evaluate the claims of Proposition 3.5 in settings with relaxed assumptions. We compare the ranking given by the true log marginal likelihood, the estimated $\mathcal{L}(\mathcal{D})$, and the weight assigned to each model by the trained linear regressor. We consider three variations on how sampled predictions from each model are drawn to generate the features $\phi_i$: sampling the prediction $\hat{Y}_i$ from $P(\hat{Y}_i|\mathcal{D}_{< i})$ ('concurrent sampling'; this is the setting of Proposition 3.5), as well as two baselines: sampling from the posterior $P(\hat{Y}_i|\mathcal{D})$ ('posterior sampling') and from the prior $P(\hat{Y}_i)$ ('prior sampling'). We find that the marginal likelihood, its lower bound, and the weights given by concurrent optimization all agree on the best model in all three of the model selection problems outlined previously, while the prior and posterior sampling baselines do not produce rankings consistent with the log ML. We visualize these results for the feature dimension selection problem in Figure 2; full results are shown in Figure 5.
209
+
210
+ We further illustrate how the $\mathcal{L}(\mathcal{D})$ estimator can select inductive biases in the infinite-width neural network regime in Figure 2. Here we evaluate the relative change in the log ML of the Gaussian processes induced by a fully-connected MLP (MLP-NTK-GP) and by a convolutional neural network (Conv-NTK-GP), each performing regression on the MNIST dataset. The fully-connected model sees a consistent decrease in its log ML with each additional data point added to the dataset, whereas, as a result of its implicit bias, the convolutional model's incremental change in log ML becomes less negative as more data points are added, and is much higher from the start of training. This leads to the Conv-NTK-GP having a higher value of $\mathcal{L}(\mathcal{D})$ than the MLP-NTK-GP. We provide an analogous plot evaluating $\log P(\mathcal{D})$ in the appendix.
211
+
212
+ ![](images/1c8b69fd77da955648c2f952e0e436bc1d690ff6390fe84630ed968c8915795c.jpg)
213
+ Figure 2: Left: Relative rankings given by optimize-then-prune, ML, and estimated $\mathcal{L}(\mathcal{D})$ on the feature selection problem. Right: visualizing the interpretation of $\mathcal{L}(\mathcal{D})$ as the 'area under the curve' of training losses: we plot the relative change in the estimator $\mathcal{L}(\mathcal{D}_{\leq i}) - \mathcal{L}(\mathcal{D}_{< i})$ for convolutional and fully-connected NTK-GP models, and shade their area.
214
+
215
+ ![](images/e1f788dc78be42bb1e7f8758e10ab017f740ab5d14eb132190125c8943f1741b.jpg)
216
+
217
+ ![](images/48b7317b0b37828c7e50e1d0dd1b917e3d2fac616f8d5c7e416620730e596fdd.jpg)
218
+ Figure 3: Linear combinations of DNNs trained on FashionMNIST. Left: ensemble weights versus the test loss for concurrent training. Middle: sum over training losses (SOTL), standardized by the number of training samples, versus test loss for parallel training. Right: training curves for the different models trained in parallel. All results are averaged over 10 runs, and standard deviations are shown by the shaded regions around each observation. The model parameters, given in the parentheses, are the number of layers $(l)$, nodes per layer $(n)$ and kernel size $(k)$, respectively.
219
+
220
+ ![](images/a32440fdf3bc3449a49426ece81ef10b8ef756526695307b5ab03a2a239f16ba.jpg)
221
+
222
+ ![](images/4101cfcd5ac689381b41f78e1254364d6b4005895b8484bb8da1b7c07b9b3d3a.jpg)
223
+
224
+ Figure 3 legend, Model $(l,n,k)$: MLP(2,200), MLP(2,100), MLP(2,10), MLP(1,200), MLP(1,100), MLP(1,400), CNN(2,200,5), CNN(2,200,3), CNN(2,100,5), CNN(2,100,3).
236
+
237
+ # 4.2 Training Speed, Ensemble Weight, and Generalization in DNNs
238
+
239
+ We now address our conjectures from Section 3, which aim to generalize our results for linear models to deep neural networks trained with SGD. Recall that our hypothesis involves translating iterative posterior samples to minibatch training losses over an SGD trajectory, and Bayesian model evidence to generalization error; we conjectured that just as the sum of the log posterior likelihoods is useful for Bayesian model selection, the sum of minibatch training losses will be useful to predict generalization error. In this section, we evaluate whether this conjecture holds for a simple convolutional neural network trained on the FashionMNIST dataset. Our results provide preliminary evidence in support of this claim, and suggest that further work investigating this relationship may reveal valuable insights into how and why neural networks generalize.
240
+
241
+ # 4.2.1 Linear Combination of DNN Architectures
242
+
243
+ We first evaluate whether the sum over training losses (SOTL) obtained over an SGD trajectory correlates with a model's generalization error, and whether SOTL predicts the weight assigned to a model by a linear ensemble. To do so, we train a linear combination of DNNs with SGD to determine whether SGD upweights NNs that generalize better. Further details of the experiment can be found in Appendix B.2. Our results are summarized in Figure 3.
244
+
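+ For reference, computing the SOTL only requires accumulating each minibatch loss as it is encountered during training. A PyTorch-style sketch (our code; the model and data loader are placeholders):
+
+ ```python
+ import torch
+
+ def train_with_sotl(model, loader, epochs=5, lr=1e-2):
+     opt = torch.optim.SGD(model.parameters(), lr=lr)
+     loss_fn = torch.nn.CrossEntropyLoss()
+     sotl = 0.0
+     for _ in range(epochs):
+         for xb, yb in loader:
+             loss = loss_fn(model(xb), yb)
+             sotl += loss.item()  # loss *before* the update, summed over training
+             opt.zero_grad()
+             loss.backward()
+             opt.step()
+     return sotl
+ ```
+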
245
+ We observe a strong correlation between SOTL and average test cross-entropy (see Figure 3, middle), validating that the SOTL is correlated with generalization. Further, we find that architectures with lower test error (when trained individually) are given higher weight by the linear ensembling layer, as can be seen from the left plot in Figure 3. This supports our hypothesis that SGD favours models that generalize well.
246
+
247
+ ![](images/6dc142291b4b55c16e9b137dde96298340c940b380eec2373b64a845b21bf4d7.jpg)
248
+ Figure 4: Weight assigned to a subnetwork by SGD in a deep neural network (x-axis) versus the subnetwork's performance (estimated by the sum of cross-entropy losses, on the y-axis) for different FashionMNIST classes. The light blue ovals depict $95\%$ confidence intervals, estimated over 10 seeds (i.e. $2\sigma$ for both the weight and SOTL). The orange line depicts the general trend.
249
+
250
+ # 4.2.2 Subnetwork Selection in Neural Networks
251
+
252
+ Finally, we evaluate whether our previous insights apply to submodels within a neural network, suggesting a potential mechanism which may bias SGD towards parameters with better generalization performance. Based on the previous experiments, we expect that nodes that have a lower sum over training errors (if evaluated as classifiers on their own) are favoured by gradient descent and therefore receive a larger final weight than those which are less predictive of the data. If so, we can then view SGD followed by pruning (in the final linear layer of the network) as performing an approximation of a Bayesian model selection procedure. We replicate the model selection problem of the previous setting, but replace the individual models with the activations of the penultimate layer of a neural network, and replace the linear ensemble with the final linear layer of the network. Full details on the experimental set-up can be found in Appendix B.3. We find that our hypotheses hold here: SGD assigns larger weights to subnetworks that perform well, as can be seen in Figure 4. This suggests that SGD is biased towards functions that generalize well, even within a network. We find the same trend holds for CIFAR-10, which is shown in Appendix B.3.
253
+
254
+ # 5 Conclusion
255
+
256
+ In this paper, we have proposed a family of estimators of the marginal likelihood which illustrate the connection between training speed and Bayesian model selection. Because gradient descent can produce exact posterior samples in linear models, our result shows that Bayesian model selection can be done by training a linear model with gradient descent and tracking how quickly it learns. This approach also applies to the infinite-width limit of deep neural networks, whose dynamics resemble those of linear models. We further highlight a connection between magnitude-based pruning and model selection, showing that models for which our lower bound is high will be assigned more weight by an optimal linear model combination. This raises the question of whether similar mechanisms exist in finitely wide neural networks, which do not behave as linear models. We provide preliminary empirical evidence that the connections shown in linear models have predictive power towards explaining generalization and training dynamics in DNNs, suggesting a promising avenue for future work.
257
+
258
+ # 6 Broader Impact
259
+
260
+ Due to the theoretical nature of this paper, we do not foresee any immediate applications (positive or negative) that may arise from our work. However, improvement in our understanding of generalization in deep learning may lead to a host of downstream impacts which we outline briefly here for completeness, noting that the marginal effect of this paper on such broad societal and environmental impacts is likely to be very small.
261
+
262
+ 1. Safety and robustness. Developing a stronger theoretical understanding of generalization will plausibly lead to training procedures which improve the test-set performance of deep neural networks. Improving generalization performance is crucial to ensuring that deep learning systems applied in practice behave as expected based on their training performance.
263
+ 2. Training efficiency and environmental impacts. In principle, obtaining better estimates of model and sub-model performance could lead to more efficient training schemes, thus potentially reducing the carbon footprint of machine learning research.
264
+ 3. Bias and Fairness. The setting of our paper, like much of the related work on generalization, does not consider out-of-distribution inputs or training under constraints. If the training dataset is biased, then a method which improves the generalization performance of the model under the i.i.d. assumption will be prone to perpetuating this bias.
265
+
266
+ # Acknowledgements
267
+
268
+ Lisa Schut was supported by the Accenture Labs and Alan Turing Institute.
269
+
270
+ # References
271
+
272
+ [1] Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. arXiv preprint arXiv:1901.08584, 2019.
273
+ [2] D. Basu. On statistics independent of a complete sufficient statistic. Sankhya: The Indian Journal of Statistics (1933-1960), 15(4):377-380, 1955. ISSN 00364452. URL http://www.jstor.org/stable/25048259
274
+ [3] Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine learning and the bias-variance trade-off. arXiv preprint arXiv:1812.11118, 2018.
275
+ [4] David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. Journal of the American statistical Association, 112(518):859-877, 2017.
276
+ [5] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In International Conference on Machine Learning, pages 1613-1622, 2015.
277
+ [6] Andreas Damianou and Neil Lawrence. Deep gaussian processes. volume 31 of Proceedings of Machine Learning Research, pages 207-215, Scottsdale, Arizona, USA, 29 Apr-01 May 2013. PMLR. URL http://proceedings.mlr.press/v31/damianou13a.html.
278
+ [7] Alexander G. de G. Matthews, Jiri Hron, Mark Rowland, Richard E. Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1-nGgWC-.
279
+ [8] Vincent Dutordoir, Mark van der Wilk, Artem Artemev, and James Hensman. Bayesian image classification with deep convolutional gaussian processes. volume 108 of Proceedings of Machine Learning Research, pages 1529-1539, Online, 26-28 Aug 2020. PMLR. URL http://proceedings.mlr.press/v108/dutordoir20a.html.
280
+ [9] David Duvenaud, Dougal Maclaurin, and Ryan Adams. Early stopping as nonparametric variational inference. In Artificial Intelligence and Statistics, pages 1070-1077, 2016.
281
+ [10] Gintare Karolina Dziugaite and Daniel M Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. arXiv preprint arXiv:1703.11008, 2017.
282
+ [11] Gintare Karolina Dziugaite and Daniel M Roy. Data-dependent PAC-Bayes priors via differential privacy. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, NeurIPS 31, pages 8430-8441. 2018.
283
+ [12] Stanislav Fort, Paweł Krzysztof Nowak, Stanislaw Jastrzebski, and Srini Narayanan. Stiffness: A new perspective on generalization in neural networks. arXiv preprint arXiv:1901.09491, 2019.
284
+ [13] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJ1-b3RcF7
285
+ [14] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pages 1050-1059, 2016.
286
+ [15] Pascal Germain, Francis Bach, Alexandre Lacoste, and Simon Lacoste-Julien. PAC-Bayesian theory meets Bayesian inference. In Advances in Neural Information Processing Systems, pages 1884-1892, 2016.
287
+ [16] Alex Graves. Practical variational inference for neural networks. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2348-2356. Curran Associates, Inc., 2011. URL http://papers.nips.cc/paper/4329-practical-variational-inference-for-neural-networks.pdf
288
+ [17] Moritz Hardt, Benjamin Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent, 2015.
289
+ [18] Bobby He, Balaji Lakshminarayanan, and Yee Whye Teh. Bayesian deep ensembles via the neural tangent kernel. arXiv preprint arXiv:2007.05864, 2020.
290
+
291
+ [19] Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the sixth annual conference on Computational learning theory, pages 5-13, 1993.
292
+ [20] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems, pages 8571-8580, 2018.
293
+ [21] Yiding Jiang, Behnam Neyshabur, Hossein Mobahi, Dilip Krishnan, and Samy Bengio. Fantastic generalization measures and where to find them, 2019.
294
+ [22] Dimitris Kalimeris, Gal Kaplun, Preetum Nakkiran, Benjamin Edelman, Tristan Yang, Boaz Barak, and Haofeng Zhang. Sgd on neural networks learns functions of increasing complexity. In Advances in Neural Information Processing Systems, pages 3491-3501, 2019.
295
+ [23] Mohammad Emtiyaz E Khan, Alexander Immer, Ehsan Abedi, and Maciej Korzepa. Approximate inference turns deep networks into gaussian processes. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d' Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 3094-3104. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/8573-approximate-inference-turns-deep-networks-into-gaussian-processes.pdf.
296
+ [24] S. Kullback and R. A. Leibler. On information and sufficiency. Ann. Math. Statist., 22(1):79-86, 03 1951. doi: 10.1214/aoms/1177729694. URL https://doi.org/10.1214/aoms/1177729694.
297
+ [25] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in neural information processing systems, pages 6402-6413, 2017.
298
+ [26] Jaehoon Lee, Jascha Sohl-dickstein, Jeffrey Pennington, Roman Novak, Sam Schoenholz, and Yasaman Bahri. Deep neural networks as gaussian processes. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1EA-M-0Z
299
+ [27] David JC MacKay. Bayesian methods for adaptive models. PhD thesis, California Institute of Technology, 1992.
300
+ [28] David JC MacKay. Information theory, inference and learning algorithms. Cambridge university press, 2003.
301
+ [29] Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P Vetrov, and Andrew Gordon Wilson. A simple baseline for bayesian uncertainty in deep learning. In Advances in Neural Information Processing Systems, pages 13132-13143, 2019.
302
+ [30] Stephan Mandt, Matthew D Hoffman, and David M Blei. Stochastic gradient descent as approximate bayesian inference. The Journal of Machine Learning Research, 18(1):4873-4907, 2017.
303
+ [31] Alexander G de G Matthews, Jiri Hron, Richard E Turner, and Zoubin Ghahramani. Samplethen-optimize posterior sampling for bayesian linear models. Neural Information Processing Systems, 2017.
304
+ [32] David A. McAllester. Some PAC-Bayesian Theorems. Machine Learning, 37(3):355-363, 1999.
305
+ [33] Vaishnavh Nagarajan and J. Zico Kolter. Uniform convergence may be unable to explain generalization in deep learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d' Alche-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 11615-11626. Curran Associates, Inc., 2019.
306
+ [34] Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. arXiv preprint arXiv:1912.02292, 2019.
307
+ [35] Jeffrey Negrea, Mahdi Haghifam, Gintare Karolina Dziugaite, Ashish Khisti, and Daniel M Roy. Information-theoretic generalization bounds for sgld via data-dependent estimates. In Advances in Neural Information Processing Systems, pages 11015-11025, 2019.
308
+
309
+ [36] Ian Osband, John Aslanides, and Albin Cassirer. Randomized prior functions for deep reinforcement learning. In Advances in Neural Information Processing Systems, pages 8617-8629, 2018.
310
+ [37] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in neural information processing systems, pages 1177-1184, 2008.
311
+ [38] Carl Edward Rasmussen. Gaussian processes in machine learning. In Summer School on Machine Learning, pages 63-71. Springer, 2003.
312
+ [39] Carl Edward Rasmussen and Zoubin Ghahramani. Occam's razor. In Advances in neural information processing systems, pages 294-300, 2001.
313
+ [40] Binxin Ru, Clare Lyle, Lisa Schut, Mark van der Wilk, and Yarin Gal. Revisiting the train loss: an efficient performance estimator for neural architecture search, 2020.
314
+ [41] Samuel L Smith and Quoc V Le. A bayesian perspective on generalization and stochastic gradient descent. arXiv preprint arXiv:1710.06451, 2017.
315
+ [42] Samuel L. Smith and Quoc V. Le. A bayesian perspective on generalization and stochastic gradient descent. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=BBij4ygOZ
316
+ [43] Guillermo Valle-Pérez, Chico Q Camargo, and Ard A Louis. Deep learning generalizes because the parameter-function map is biased towards simple functions. arXiv preprint arXiv:1805.08522, 2018.
317
+ [44] M. van der Wilk, M. Bauer, S. John, and J. Hensman. Learning invariances using the marginal likelihood. arXiv preprint arXiv:1808.05563, 2018.
318
+ [45] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 681-688, 2011.
319
+ [46] Andrew Gordon Wilson and Pavel Izmailov. Bayesian deep learning and a probabilistic perspective of generalization. arXiv preprint arXiv:2002.08791, 2020.
320
+ [47] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
abayesianperspectiveontrainingspeedandmodelselection/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3eb1f5e9be6f2a2ed010724fca9c304802fea49d5750295acd262bf8f2293b35
3
+ size 257559
abayesianperspectiveontrainingspeedandmodelselection/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7644aa38f8407c1ccff4d8bfc0a3e8ee1bb7e348a937f1736f86ed60ab757a5c
3
+ size 426453
abenchmarkforsystematicgeneralizationingroundedlanguageunderstanding/66ecf06f-3cd5-4c86-b564-a5e5f9bf067a_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8de057060132a39d549588f83772c0a6bc1924d56ff3181d41abc87492d07102
3
+ size 73770
abenchmarkforsystematicgeneralizationingroundedlanguageunderstanding/66ecf06f-3cd5-4c86-b564-a5e5f9bf067a_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1b5da4b3f0c72ad60c2badb043662133e28952dfd19fb10748a1426673721cce
3
+ size 88872
abenchmarkforsystematicgeneralizationingroundedlanguageunderstanding/66ecf06f-3cd5-4c86-b564-a5e5f9bf067a_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7d68297444908b366cbd3d4ac983d98c9a0223dc6783f58ecca1d9b61c922a5e
3
+ size 1013784
abenchmarkforsystematicgeneralizationingroundedlanguageunderstanding/full.md ADDED
@@ -0,0 +1,233 @@
 
 
 
 
1
+ # A Benchmark for Systematic Generalization in Grounded Language Understanding
2
+
3
+ Laura Ruis*
4
+
5
+ University of Amsterdam laura.ruis@student.uva.nl
6
+
7
+ Jacob Andreas
8
+
9
+ Massachusetts Institute of Technology jda@mit.edu
10
+
11
+ Marco Baroni ICREA
12
+
13
+ Facebook AI Research
14
+ mbaroni@fb.com
15
+
16
+ Diane Bouchacourt
17
+
18
+ Facebook AI Research dianeb@fb.com
19
+
20
+ Brenden M. Lake
21
+
22
+ New York University Facebook AI Research brenden@nyu.edu
23
+
24
+ # Abstract
25
+
26
+ Humans easily interpret expressions that describe unfamiliar situations composed from familiar parts ("greet the pink brontosaurus by the ferris wheel"). Modern neural networks, by contrast, struggle to interpret novel compositions. In this paper, we introduce a new benchmark, gSCAN, for evaluating compositional generalization in situated language understanding. Going beyond a related benchmark that focused on syntactic aspects of generalization, gSCAN defines a language grounded in the states of a grid world, facilitating novel evaluations of acquiring linguistically motivated rules. For example, agents must understand how adjectives such as 'small' are interpreted relative to the current world state or how adverbs such as 'cautiously' combine with new verbs. We test a strong multi-modal baseline model and a state-of-the-art compositional method finding that, in most cases, they fail dramatically when generalization requires systematic compositional rules.
27
+
28
+ # 1 Introduction
29
+
30
+ Human language is a fabulous tool for generalization. If you know the meaning of the word 'small', you can probably pick the 'small wampimuk' among larger ones, even if this is your first encounter with wampimuks. If you know how to 'walk cautiously,' you can infer how to 'bike cautiously' through a busy intersection (see example in Figure 1). The ability to learn new words from limited data and use them in a variety of contexts can be attributed to our aptness for systematic compositionality [9, 33]: the algebraic capacity to understand and produce potentially infinite combinations from known components. Modern deep neural networks, while strong in many domains [29], have not mastered comparable language-based generalization challenges, a fact conjectured to underlie their sample inefficiency and inflexibility [26, 25, 8]. Recent benchmarks have been proposed for language-based generalization in deep networks [20, 16, 17], but they do not specifically test for a model's ability to perform rule-based generalization, or do so only in limited contexts. Systematic, rule-based generalization is instead at the core of the recently introduced SCAN dataset [25] (see [19, 3] for related ideas). In a series of studies, Lake, Baroni and colleagues [5, 30, 10] tested various standard deep architectures for their ability to extract general composition rules supporting zero-shot interpretation of new composite linguistic expressions (can you tell what 'dax twice' means, if you know the meaning of 'dax' and 'run twice'?). In most cases, neural networks were unable to generalize correctly. Very recent work has shown that specific architectural or training-regime adaptations allow
31
+
32
+ ![](images/9c5c8ed9d487a49b8a53718e9d0ffe0e4554d7fc443174675a92c55bdf8df9f7.jpg)
33
+ Figure 1: gSCAN evaluates context sensitivity in situated language understanding. In these two simplified examples, the same determiner phrase 'the red small circle' has different referents and demands different action sequences. Being cautious means looking both ways ('L_turn R_turn R_turn L_turn') before each step.
34
+
35
+ ![](images/2017fb3e1854603ac03fa553d19c2fbe86fb5f705c9be130c9bf1eb22cde93ee.jpg)
36
+ Figure 2: Examples showing how to 'walk while spinning' and how to 'push.' On the left, the agent needs to spin around ('L_turn L_turn L_turn L_turn') before it moves by one grid cell. On the right, it needs to push a square all the way to the wall.
37
+
38
+ deep networks to handle at least some of the SCAN challenges [2, 27, 34, 36, 14]. However, it is unclear to what extent these proposals account for genuine compositional generalization, and to what extent they are 'overfitting' to the limitations of SCAN.
39
+
40
+ SCAN simulates a navigation environment through an interpretation function that associates linguistic commands ('walk left') to sequences of primitive actions ('L_turn walk'). SCAN, however, is not grounded, in that it lacks a 'world' with respect to which commands are interpreted: instead the agent must simply associate linguistic strings with fixed sequences of action symbols, mapping syntactic strings (word sequences) to other syntactic strings (action label sequences). In real languages, by contrast, the process by which utterances are understood is both compositional and contextual: references to entities and descriptions of actions must be interpreted with respect to a particular state of the world. The interaction between compositionality and context introduces new types of generalization an intelligent agent might have to perform. For example, consider the meaning of size adjectives such as 'small' and 'large'. The determiner phrases 'the small bottle' and 'the large bottle' might refer to the same bottle, depending on the sizes of the surrounding bottles. We wonder whether this and related notions of compositional generalization can be addressed using existing techniques, but SCAN's context insensitivity makes it impossible to investigate broader notions of generalization.
41
+
42
+ We introduce grounded SCAN (gSCAN), a new benchmark that, like the original SCAN, focuses on rule-based generalization, but where meaning is grounded in states of a grid world accessible to the agent. This allows gSCAN to evaluate eight types of compositional generalization (mostly from a single training set), whereas most benchmarks focus on just one or two types. For example, Figure 1 illustrates context-sensitivity in compositionality: how the target referent 'the red small circle', and the action sequence required to navigate there, will change based on the state of the world. It also illustrates modification-based compositionality: once learning how to walk somewhere 'cautiously' (Figure 1 left), can models walk elsewhere cautiously or push an object cautiously (Figure 1 right)? On these and other generalizations, we test a baseline multi-modal model representative of contemporary deep neural architectures, as well as a recent method proposed to address compositional generalization in the original SCAN dataset (GECA, [2]). Across eight different generalization splits, the baseline dramatically fails on all but one split, and GECA does better on only one more split. These results demonstrate the challenges of accounting for common natural language generalization phenomena with standard neural models, and affirm gSCAN as a fruitful benchmark for developing models with more human-like compositional learning skills.
43
+
44
+ # 2 Related Work
45
+
46
+ Recent work has recognized the advantages of compositional generalization for robustness and sample efficiency, and responded by building synthetic environments to evaluate models on aspects of this skill [7, 21, 25, 30, 4, 16, 17, 8, 19, 3]. Several of these works also ground the semantics of language in a different modality. Bahdanau et al. [4] evaluate binary questions about object relations and probe unseen object combinations. Chevalier-Boisvert et al. [8] study curriculum learning in a grid world of navigation-related tasks.
47
+
48
+ The authors show that for tasks with a compositional structure, agents generalize poorly and need large amounts of demonstrations. Crucially, the difference between these works and ours is that we rely on evidence that humans systematically generalize in language [9, 33, 28] and only test for linguistic, rule-based generalization. Hill et al. [17] also test for linguistic generalization and evaluate unseen combinations of verbs and objects in skill learning, showing the benefits of richer grounded environments. Our benchmark includes a similar test for verb-object binding, but adds challenges to more comprehensively cover the multi-faceted nature of systematic compositionality.
49
+
50
+ Lake and Baroni [25] proposed the SCAN benchmark for evaluating systematic generalization, which was distinguished by testing for linguistic generalization and learning abstract compositional rules (see also [7, 3, 22, 19]). SCAN concerns instructions generated by a phrase-structure grammar that can unambiguously be translated into action sequences by applying an interpretation function. The data is split into training and test sets that contain systematic differences. For example, models must interpret phrases that contain primitives only encountered in isolation at training time (e.g., inferring that the command 'jump twice' translates to actions jump jump when you know that 'walk twice' translates to walk walk and 'jump' translates to jump). SCAN however lacks grounding, which severely limits the variety of linguistic generalizations it can examine.
51
+
52
+ In addition to adding grounding, gSCAN was developed in ways that render previous SCAN-based methods either inapplicable or unsuccessful. Gordon et al. [14] formalize the compositional skills in SCAN as equivariance to a certain group of permutations. Their model is hard-coded to be equivariant to all permutations of SCAN's verb primitives and succeeds on some of the tasks. However, the method only tackles local permutations in the input command (e.g., swapping 'walk' for 'jump') that result in local permutations in the action sequence (swapping walk for jump). More realistically, in gSCAN, permuting words can result in referencing different objects, different interactions, or different manners of moving. Permutations in the instructions thus modify the actions in a non-local manner (in their terminology), a problem that would also affect the recent syntax-semantics separation approach [36]. Another new approach uses meta-learning to succeed on some SCAN splits [27]. This model 'learns how to learn' new primitives in meta-training episodes where words are randomly mapped to meanings. But it is unclear how such episodes should be designed for gSCAN, since there are no random mappings to exploit between primitives and action symbols. Thus, these new methods [14, 36, 27] are inapplicable to gSCAN, at least in their current forms, and may require highly non-trivial extensions to attempt the benchmark.
53
+
54
+ Good-enough compositional data augmentation (GECA) [2] is a model-agnostic method that obtains good results on SCAN and is applicable to gSCAN. GECA identifies sentence fragments which appear in similar environments and uses those to generate more training examples. For instance, when one infers that 'the cat sang', 'the wug sang' and 'the cat danced' are high probability training sentences, then 'the wug danced' is also probable, but 'the sang danced' is not. The assumption here is that 'cat' and 'wug' are interchangeable, and GECA indeed helps in such cases. As our results will show, this assumption is unreliable in more realistic, grounded language understanding.
55
+
56
+ # 3 The Grounded SCAN Benchmark
57
+
58
+ We aim to test a broad set of phenomena in situated language understanding where humans should easily generalize, but where we expect computational models to struggle due to the systematicity of the differences between train and test. For a learner agent, the goal is to process a synthetic language command combined with a world state and produce a sequence of target actions that correctly execute the input command in the world state (see Figure 1 and 2), which can be treated as a multi-modal sequence-to-sequence supervised learning task. In this section we describe what tools we work with to design the tests, and in Section 5 we describe in detail how each linguistic phenomenon is tested. The code to generate the benchmark and the data used in the experiments are both publicly available.[2]
59
+
60
+ Instructions. Grounded SCAN (gSCAN) asks agents to execute instructions in a 2D grid world with objects. We build on the formal approach of SCAN [25] while evaluating a much wider range of linguistic generalizations by grounding the semantics of the input instructions. The world model allows us to examine how often an agent needs to see 'move cautiously' before applying 'cautiously' in a novel scenario, whether an agent can identify a novel object by reasoning about its relation to other objects, and whether an agent can infer how to interact with objects by identifying abstract object properties.
61
+
62
+ Figure 2 shows two example commands and corresponding action sequences, which are simplified but representative of gSCAN (actual gSCAN examples use a larger grid and more objects). On the left, the agent must generate the target actions that lead to the circle 'while spinning.' All adverbial modifiers such as 'cautiously' or 'while spinning' require applying complex, context-sensitive transformations to the target sequence, going beyond the simple substitutions and concatenations representative of SCAN. On the right (Figure 2), the agent must push a small square, where pushing means moving the object as far as possible until it hits the wall (as in this case) or another object. The agent can also 'pull' objects, in which case it pulls the object back as far as possible. The full phrase-structure grammar for instructions is provided in Appendix A.
63
+
64
+ World model. Each instruction is paired with a relevant world state, presented to the agent as a tensor $\mathbf{X}_s\in \mathbb{R}^{d\times d\times c}$ for grid size $d$ ($d = 6$ or $12$ depending on split)$^3$. The object at each grid cell is defined via one-hot encodings along three property types, namely color $\mathcal{C} = \{\mathrm{red},\mathrm{green},\mathrm{blue},\mathrm{yellow}\}$, shape $\mathcal{S} = \{\mathrm{circle},\mathrm{square},\mathrm{cylinder}\}$, and size $\mathcal{D} = \{1,2,3,4\}$. Specifying the agent location and heading requires five more channels, and thus the number of channels is $c = 5 + |\mathcal{C}| + |\mathcal{S}| + |\mathcal{D}|$. As for the outputs, agents produce strings composed of action symbols $\{\text{walk}, \text{push}, \text{pull}, \text{stay}, \text{L\_turn}, \text{R\_turn}\}$.
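+
+ For concreteness, a minimal sketch of this encoding (the channel ordering and helper names are our own assumptions, not the released gSCAN code):
+
+ ```python
+ import numpy as np
+
+ COLORS = ["red", "green", "blue", "yellow"]   # |C| = 4
+ SHAPES = ["circle", "square", "cylinder"]     # |S| = 3
+ SIZES = [1, 2, 3, 4]                          # |D| = 4
+ AGENT_CHANNELS = 5  # agent presence + 4 headings (one plausible layout)
+
+ def encode_object(color, shape, size):
+     """One-hot encode one object's properties (channels after the agent block)."""
+     vec = np.zeros(len(COLORS) + len(SHAPES) + len(SIZES))
+     vec[COLORS.index(color)] = 1.0
+     vec[len(COLORS) + SHAPES.index(shape)] = 1.0
+     vec[len(COLORS) + len(SHAPES) + SIZES.index(size)] = 1.0
+     return vec
+
+ d = 6
+ c = AGENT_CHANNELS + len(COLORS) + len(SHAPES) + len(SIZES)  # c = 16
+ X_s = np.zeros((d, d, c))
+ X_s[2, 3, AGENT_CHANNELS:] = encode_object("red", "circle", 2)  # red circle, size 2
+ ```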
65
+
66
+ To ensure that only relevant world states are combined with an instruction, each instruction imposes several constraints on the world states it can be paired with. For instance, each target referent from the instruction determiner phrase is ensured to be unique (there is only one possible target for "walk to the yellow square"). Moreover, to conform to natural language pragmatics, if a size modifier is used there is always a relevant distractor. For example, when the target referent is 'the small square', we additionally place a square that is larger (Figure 2, right). Appendix B details how objects are placed in the world.
67
+
68
+ Further, objects of size 1 and 2 are assigned the latent class light, and objects of size 3 and 4 are heavy. This division determines how the agent should interact with the object. If an object is light, it needs to be pushed once to move it to the next cell, executed by the action command 'push' (similarly for 'pull'). If an object is heavy, it needs to be pushed twice to move to the next cell ('push push').
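+
+ A minimal sketch of this interaction rule as we read it (the function name and signature are hypothetical):
+
+ ```python
+ def interaction_actions(verb, size, cells_to_move):
+     """Moving a light object one cell emits one 'push'/'pull'; a heavy object, two."""
+     repeats = 1 if size in (1, 2) else 2  # sizes 1-2 are light, 3-4 are heavy
+     return [verb] * (repeats * cells_to_move)
+
+ assert interaction_actions("push", size=3, cells_to_move=2) == ["push"] * 4
+ ```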
69
+
70
+ Data splits. Equipped with this framework, we design splits with systematic differences between training and test. We distinguish two broad types of tests, compositional generalization and length generalization. To facilitate more rapid progress and lower compute requirements, we designed the eight systematic splits to require training just two models (one for compositional and one for length generalization), as multiple tests use the same training set. For both broad types of tests we also examine a 'random split' with no systematic differences between training and test, to ensure that the agents are able to execute commands to a high accuracy when they require no systematic compositionality (Section 5A).
71
+
72
+ 'Compositional generalization' evaluates combining known concepts into novel meaning (detailed in Section 5B-H). From a single training set we can evaluate how an agent handles a range of systematic generalizations, including novel object property combinations ('red square'; Section 5B,C), novel directions (a target to the south west; 5D), novel contextual references ('small yellow circle'; 5E), and novel adverbs ('pull cautiously'; 5G,H). For example, we model an analogue of the 'wampimuk' case from the introduction by holding out all examples where a circle of size 2 is referred to as 'the small circle' (Section 5E). We test whether models can successfully pick out the small circle among larger ones, even though that particular circle is referred to during training only as 'the circle' (with no other circles present) or 'the large circle' (with only smaller circles present). The shared training set across splits has more than $300\mathrm{k}$ demonstrations of instructions and their action sequences, and each test instruction evaluates just one systematic difference. For more details on the number of examples in the training and test sets of the experiments, refer to Appendix C. To substantiate the fairness of the tests we also discuss the number and combinations of concepts available during training [13] (see individual splits in Section 5).
73
+
74
+ 'Length generalization' (Section 5I) evaluates a persistent problem with sequence generation models: generalizing beyond the lengths seen during training [15]. Since length generalization is entirely unsolved for SCAN, even for methods that made progress on the other splits [25, 5, 36, 14], we also separately generate a split to evaluate generalization to longer action sequences. We do this by using a larger grid size than in the compositional generalization splits $(d = 12)$ , and hold out all examples with target sequences of length $m > 15$ . During training, a model sees all the possible instructions (see Appendix C), but they require fewer actions than at test time (up to $m = 47$ ).
75
+
76
+ Table 1: Results for each split, showing exact match accuracy (average of 3 runs ± std. dev.). Models fail on all splits except A, C, and F.
77
+
78
+ <table><tr><td></td><td colspan="2">Exact Match (%)</td></tr><tr><td>Split</td><td>Baseline</td><td>GECA</td></tr><tr><td>A: Random</td><td>97.69 ± 0.22</td><td>87.6 ± 1.19</td></tr><tr><td>B: Yellow squares</td><td>54.96 ± 39.39</td><td>34.92 ± 39.30</td></tr><tr><td>C: Red squares</td><td>23.51 ± 21.82</td><td>78.77 ± 6.63</td></tr><tr><td>D: Novel direction</td><td>0.00 ± 0.00</td><td>0.00 ± 0.00</td></tr><tr><td>E: Relativity</td><td>35.02 ± 2.35</td><td>33.19 ± 3.69</td></tr><tr><td>F: Class inference</td><td>92.52 ± 6.75</td><td>85.99 ± 0.85</td></tr><tr><td>G: Adverb k = 1</td><td>0.00 ± 0.00</td><td>0.00 ± 0.00</td></tr><tr><td>Adverb k = 5</td><td>0.47 ± 0.14</td><td>-</td></tr><tr><td>Adverb k = 10</td><td>2.04 ± 0.95</td><td>-</td></tr><tr><td>Adverb k = 50</td><td>4.63 ± 2.08</td><td>-</td></tr><tr><td>H: Adverb to verb</td><td>22.70 ± 4.59</td><td>11.83 ± 0.31</td></tr><tr><td>I: Length</td><td>2.10 ± 0.05</td><td>-</td></tr></table>
79
+
80
+ ![](images/213556486d2f956de762aee2a23a67e49d3b8eb9779b5d5377b0e354075d04fc.jpg)
81
+ Figure 3: Baseline network. The command encoder is a biLSTM ( $f_c$ ; top left), and the world state encoder is a CNN ( $f_s$ ; top right). An LSTM decoder jointly attends over $f_c$ and $f_s$ to produce action sequences (bottom). 'SOS' is the start-of-sequence token that kicks off generation.
82
+
83
+ # 4 Baselines
84
+
85
+ Models are trained using supervised learning to map instructions to action sequences, given a world context. We train a multi-modal neural baseline to generate action sequences, conditioned on the input commands and world state (Figure 3). The architecture is not new and uses standard machinery, e.g., [32], but we explain the key components below for completeness (details in Appendix D).
86
+
87
+ The baseline is a sequence-to-sequence (seq2seq) [38] model fused with a visual encoder. It uses a recurrent 'command encoder' to process the instructions ("walk to the circle" in Figure 3) and a 'state encoder' to process the grid world. A recurrent decoder generates an action sequence (e.g., walk) through joint attention over the command steps and grid cells. The input tuple $\mathbf{x} = (\mathbf{x}^c, \mathbf{X}^s)$ includes the command sequence $\mathbf{x}^c = \{x_1^c, \ldots, x_n^c\}$ and the world state $\mathbf{X}^s \in \mathbb{R}^{d \times d \times c}$ , for a $d \times d$ grid. The target sequence $\mathbf{y} = \{y_1, \ldots, y_m\}$ is modeled as $p_\theta(\mathbf{y} \mid \mathbf{x}) = \prod_{j=1}^m p_\theta(y_j \mid \mathbf{x}, y_1, \ldots, y_{j-1})$ .
88
+
89
+ Command encoder. The network processes the instruction with a bidirectional LSTM [18, 37] denoted $\mathbf{h}^c = f_c(\mathbf{x}^c)$ (Figure 3). It produces $\mathbf{h}^c = \{h_1^c,\dots,h_n^c\}$ with a vector for each of the $n$ words.
90
+
91
+ State encoder. The network perceives the initial world state through a convolutional network (CNN; Figure 3) denoted $\mathbf{H}^s = f_s(\mathbf{X}^s)$ , with three kernel sizes [40]. It produces a grid-based representation of the world state $\mathbf{H}^s \in \mathbb{R}^{d \times d \times 3c_{\mathrm{out}}}$ with $c_{\mathrm{out}}$ as the number of feature maps per kernel size.
92
+
93
+ Decoder. The output decoder $f_{d}$ models the action sequences given the encoder representations, $p(\mathbf{y}|\mathbf{h}^{c},\mathbf{H}^{s})$. At each step, the previous output $y_{j - 1}$ is embedded as $\mathbf{e}_j^d\in \mathbb{R}^{d_e}$, leading to $\mathbf{h}_j^d = \mathrm{LSTM}([\mathbf{e}_j^d;\mathbf{c}_j^c;\mathbf{c}_j^s],\mathbf{h}_{j - 1}^d)$. Context vectors $\mathbf{c}_j^c$ and $\mathbf{c}_j^s$ use double attention [11]. First, the command context is $\mathbf{c}_j^c = \mathrm{Attention}(\mathbf{h}_{j - 1}^d,\mathbf{h}^c)$, attending over the input steps and producing a weighted average of $\mathbf{h}^c$ (see Appendix D for the definition). Second, conditioning on $\mathbf{c}_j^c$, the state context is $\mathbf{c}_j^s = \mathrm{Attention}([\mathbf{c}_j^c;\mathbf{h}_{j - 1}^d],\mathbf{H}^s)$, attending over grid locations and producing a weighted average of $\mathbf{H}^s$. The action emission $y_{j}$ is then $p(y_{j}\mid \mathbf{x},y_{1},\ldots ,y_{j - 1}) = \mathrm{softmax}(\mathbf{W}_{o}[\mathbf{e}_{j}^{d};\mathbf{h}_{j}^{d};\mathbf{c}_{j}^{c};\mathbf{c}_{j}^{s}])$.
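+
+ A toy numpy sketch of one double-attention step; plain dot-product scoring stands in for the learned attention of [11], and all sizes are illustrative:
+
+ ```python
+ import numpy as np
+
+ def softmax(z):
+     e = np.exp(z - z.max())
+     return e / e.sum()
+
+ def attention(query, keys):
+     """Score every row of `keys` against `query`, return their weighted average."""
+     alpha = softmax(keys @ query)
+     return alpha @ keys
+
+ h_prev = np.random.randn(64)   # previous decoder hidden state
+ h_c = np.random.randn(5, 64)   # command encoder outputs (n = 5 words)
+ H_s = np.random.randn(36, 64)  # state encoder outputs (6x6 grid, flattened)
+
+ c_cmd = attention(h_prev, h_c)            # attend over command steps first
+ c_state = attention(c_cmd + h_prev, H_s)  # then over grid cells; the sum is a
+                                           # crude stand-in for a learned map of [c;h]
+ ```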
94
+
95
+ Training. Training optimizes cross-entropy using Adam with default parameters [23]. Supervision is provided by ground-truth target sequences, with the convention of traveling horizontally first and then vertically (either is okay at test). The learning rate starts at 0.001 and decays by 0.9 every 20,000 steps. We train for 200,000 steps with batch size 200. The best model was chosen based on a small development set of 2,000 examples (full details in Appendix E). The most important parameters were the kernel sizes, chosen as 1, 5, and 7 for 6x6 states and 1, 5, and 13 for 12x12 states.
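+
+ The stepwise decay just described amounts to (a trivial restatement, for concreteness):
+
+ ```python
+ def learning_rate(step, base=0.001, decay=0.9, every=20_000):
+     """Multiply the base learning rate by 0.9 every 20k training steps."""
+     return base * decay ** (step // every)
+ ```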
96
+
97
+ Good-enough compositional data augmentation. A GECA-enhanced model was run on gSCAN with parameters similar to those in [2]. The context windows are full sentences, with a gap size of one and a maximum of two gaps (see Appendix E for details). GECA receives an input sequence consisting of the natural language command concatenated with an output sequence containing a linearized
98
+
99
+ ![](images/b53caaeccf747472322b1b556eebe55b5357891852307e02eaa416e58b3ebab8.jpg)
100
+ "Walk to the big square." "Walk to the small square "Push the small square.
101
+
102
+ ![](images/07266169e74397cc5a3e7fb19d3c8f91d357b106ad39f842a0a2cde963edd5a4.jpg)
103
+ "Walk to the big yellow square." "Pull the small yellow square."
104
+
105
+ ![](images/c77357238d32971affe557cc73a96d9fdae73864fd8d516a293a7cbe9e2d65d7.jpg)
106
+ "Walk to the blue square." "Walk to the red circle." "Push the red circle."
107
+
108
+ ![](images/17c4b7d0e4716a59756e15d849d0f72e8a26ccae8cabc6f0e2ecd11efa75fa86.jpg)
109
+ "Walk to the red square." "Push the red square." "I'm not sure," he said.
110
+ Figure 4: Generalizing from calling an object 'big square' to calling it 'big yellow square' (left), and from 'red' and 'square' to 'red square' (right).
111
+
112
+ ![](images/bd68341dcd30a95a798e5653a1fb488aeb07a7f56586d8a5225a9f63f247db24.jpg)
113
+
114
+ ![](images/1165a585a05e6a547323b82f0cd2f843fc7a8bec5e8bd9a0a26ba9c9c7dda366.jpg)
115
+
116
+ ![](images/f18d45bccff98445f2f6d7536a40a83f53baf9662f2be827c7fa0f3f99510130.jpg)
117
+ "Walk to the big circ "Walk to the yellow big circle."
118
+ "Walk to the small circle." "Walk to the yellow small circle."
119
+ Figure 5: Generalizing from calling an object "big" to calling it "small."
120
+
121
+ ![](images/826bb264451b37d23a0a44188fbfeceb003e6ce6c6b658a1a2557d248c4db045.jpg)
122
+
123
+ ![](images/7cb88e275621858a493f61758db3647b37bc055f25eefb63153dfecf1d540a66.jpg)
124
+
125
+ ![](images/f35231fa352c30bb66e4cbff9625af46ed320346d8831df0d9d1b7bc7f7de5eb.jpg)
126
+ "Pull the square."
127
+ square." "Push the square."
128
+ Figure 6: Generalizing from pulling to pushing a heavy square.
129
+
130
+ representation of the target object's feature vector. After generating augmented sequences, we apply them back to the gSCAN dataset by modifying training examples whose input sentences admit augmentation. Modification leaves action sequences unchanged while changing the commands and environment features. For example, the command "walk to a red circle" could become "walk to a red square", and the world state would analogously replace the target red circle with a red square.
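+
+ A toy sketch of one such modification (the data layout and helper are ours, not GECA's actual implementation):
+
+ ```python
+ def apply_swap(example, old=("red", "circle"), new=("red", "square")):
+     """Swap the referenced object in the command and the world state; keep actions."""
+     cmd = example["command"].replace(" ".join(old), " ".join(new))
+     target = dict(example["target"], color=new[0], shape=new[1])
+     return {"command": cmd, "target": target, "actions": example["actions"]}
+
+ ex = {"command": "walk to a red circle",
+       "target": {"color": "red", "shape": "circle", "size": 2},
+       "actions": ["walk", "walk", "L_turn", "walk"]}
+ print(apply_swap(ex)["command"])  # walk to a red square
+ ```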
131
+
132
+ # 5 Experiments
133
+
134
+ Our main contribution is the design of test sets that require different forms of linguistic generalization. We trained the models from the previous section on the two training sets described in Section 3. A summary of results is shown in Table 1, with each split detailed below. All experiment and model code is available so our results can be reproduced and built upon.
135
+
136
+ A: Random split. The random split verifies that the models can learn to follow gSCAN commands when there are no systematic differences between training and test. The training set covers all instructions that appear in this split, each coupled with more than 170 unique world states (see Appendix C for more details), meaning that test examples only differ in the world states that are combined with the instruction (e.g., the training set might instruct the agent to "walk to the small red circle" with a target red circle in row 2 and column 3, and at test time the agent needs to generalize to walking to the small red circle in row 1 column 4). The baseline model achieves near perfect exact match accuracy ($97.69\% \pm 0.22$, mean over 3 runs reported with standard deviation) on the 19,282 test examples, where exact match means that the entire action sequence is produced correctly. GECA performs worse ($87.6\% \pm 1.19$), which is unsurprising since the assumption underlying the data augmentation procedure in GECA is that phrases that appear in similar environments can be permuted. This is not always correct in gSCAN (none of the verbs and adverbs can be permuted).
137
+
138
+ B, C: Novel composition of object properties. Here we examine a well-studied but still challenging type of compositionality [20, 16, 12]: whether a model can learn to recombine familiar colors and shapes to recognize a novel color-shape combination (see Figure 4 for examples). Given that objects are clearly distinguished from the referring expressions denoting them (a red square of size 2 may have referents 'square', 'red square', 'small red square', or 'large red square'), we consider two separate setups, one involving composition of references and another involving composition of attributes.
139
+
140
+ For a first split, we hold out all data examples where a yellow square (of any size) is the target object and is referred to with the determiner phrases 'the yellow square', 'the small yellow square' or 'the big yellow square' (i.e., any phrase containing the color adjective and the shape). The training set contains examples with yellow squares as a target, but they are always referred to without a color: 'the square', 'the big square', or 'the small square', meaning the methods cannot
141
+
142
+ Table 2: Exact match broken down by referred target (i.e., the target object denoted in the determiner phrase of the instruction). The $\star$ column indicates chance performance of choosing an object uniformly at random and correctly navigating to it.
143
+
144
+ <table><tr><td>Referred Target</td><td>★</td><td>Baseline</td><td>GECA</td></tr><tr><td>‘small red square’</td><td>8.33</td><td>13.09 ± 14.07</td><td>78.64 ± 1.10</td></tr><tr><td>‘big red square’</td><td>8.33</td><td>11.03 ± 10.29</td><td>77.88 ± 0.95</td></tr><tr><td>‘red square’</td><td>16.67</td><td>8.48 ± 0.90</td><td>97.95 ± 0.14</td></tr><tr><td>‘small square’</td><td>8.33</td><td>27.13 ± 41.38</td><td>66.26 ± 13.60</td></tr><tr><td>‘big square’</td><td>8.33</td><td>22.96 ± 32.20</td><td>68.09 ± 20.90</td></tr><tr><td>‘square’</td><td>50</td><td>52.92 ± 36.81</td><td>95.09 ± 7.42</td></tr></table>
145
+
146
+ ground that target to the reference 'yellow square' (Figure 4, left). At test time, the model needs to zero-shot generalize to the yellow square being referred to by its color. In the second split, a red square is never the target during training, meaning the methods, in addition to never encountering the determiner phrase 'the red square', cannot ground a reference to this object (Figure 4, right). However, the red square is familiar since it appears often as a non-target background object. There is ample opportunity to learn these color-shape combinations during training: yellow squares are the target reference 16,725 times, and 'square' and 'red' appear in many other contexts.
147
+
148
+ The baseline shows poor performance on the 'red squares' split that requires zero-shot generalization to the target red square ($23.51\% \pm 21.82$; again exact match with standard deviation over 3 runs). GECA does substantially better on this split ($78.77\% \pm 6.63$). This is precisely what GECA is designed for; permuting 'red circle' and 'yellow square' during training gives familiarity with 'red square'. Surprisingly, GECA does not improve over the baseline for the 'yellow square' split ($34.92\% \pm 39.30$ for GECA and $54.96\% \pm 39.39$ for the baseline). In this split, yellow squares have been seen during training as a target object, yet never referred to using their color: the commands contain '(small, big) square' but never '(small, big) yellow square'. We hypothesize that models overfit this pattern, and while GECA should help by generating instructions including 'yellow', their number is still too small compared to instructions of the form '(small, big) square'. In Appendix C we take a closer look at the differences in the training evidence seen by the baseline model and GECA.
149
+
150
+ When analyzing accuracy per referred target for the 'red squares' split, we can reason about what happens (Table 2). The baseline model never sees the referred target 'red square' (except as a background object) and is unable to compose a meaningful representation of this object, confirming past work [24, 30]. It has seen plenty of evidence for other red objects (circles and cylinders) and other colored squares (blue or green). Higher performance when only the shape is explicitly denoted (e.g., "walk to the square", when the square happens to be red) is expected because there are at most 2 objects in the world when only shape is denoted (chance is $50\%$). GECA does well when 'the red square' or 'the square' is used, but is thrown off when a size adjective is mentioned. Again, it seems these adjectives are too strongly grounded to the objects that do appear during training.
151
+
152
+ Last, we examine whether the errors come from target identification or sequence generation. In the random split (A), the agent attends maximally (averaged over steps) to the target object in $94\%$ of episodes. In contrast, in the 'yellow square' and 'red square' splits, the agent does so in only $0.08\%$ and $47\%$ of error episodes, respectively, showing clear failures of compositional target identification.
153
+
154
+ D: Novel direction. In the next experiment we examine generalizing to navigation in a novel direction. We hold out all examples from the training set where the target object is located to the south-west of the agent. The agent can train on walking in any other direction, and needs to generalize to walking to the west and then the south, or vice-versa. Conceptually, at test time the agent needs to combine the familiar notions of walking to the south and to the west. Results are in the row 'novel direction' of Table 1. Both methods obtain 0 exact matches over all runs ($0\%$ correct). A closer analysis of the predictions (see Figure 2 in Appendix F for 2 examples) shows that the agent usually walks all the way west (or south) and then fails to turn towards the target object. The attention shows that the agent knows where to go (by attending to the correct grid cell), just not how to get there. Even though there is catastrophic failure at the task overall, the agent learned by the baseline model ends up in the correct row or column of the target $63.10\% \pm 3.90$ of the time, and the agent learned with GECA $58.85\% \pm 3.45$ of the time. This further substantiates that they often walk all the way west, but then fail to travel the distance left to the south, or vice-versa. The results on this split indicate that the methods completely fail to generate target sequences that have either three occurrences of $\text{L\_turn}$ (needed to walk to the west and then south for an agent that starts facing east) or two occurrences of $\text{R\_turn}$ (needed to walk to the south and then west) spread over the target sequence.
155
+
156
+ E: Novel contextual references. In natural language, many words can only be grounded to relative concepts. Which object one refers to when saying 'the small circle' depends on the other circles in the world state. We investigate whether a model can grasp relativity in language by considering a scenario where objects of a specific size (size 2) are never correctly picked out by the 'small' modifier during training (see Figure 5). At test time, the target is a circle of size 2, correctly referred to as a 'small circle' (the determiner phrase may also mention color). In other words, we hold out for testing all world states where the circle of size 2 is the target and the smallest circle in the world, paired with an instruction containing the word 'small'. The agent can ground size 2 circles to references like 'green circle' or 'big circle' (there are 29,966 such training examples), but
157
+
158
+ needs to generalize to that same circle being referred to as 'the small circle' at test time. To do this correctly the agent cannot simply memorize how 'small' is used in training, but needs to understand that its meaning is relative with respect to the world state.
159
+
160
+ Both methods are again substantially worse than on the random split: $35.02\% \pm 2.35$ for the baseline and $33.19\% \pm 3.69$ for GECA. Breaking down the exact match per referred target suggests the model exploits the fact that when the color of the circle is specified in addition to the size modifier 'small', it can randomly choose between 2 circles of the specified color, as opposed to randomly choosing between any circle in the world. During data generation, for instructions containing some combination of a color, size modifier, and shape in the determiner phrase (e.g., 'the small red circle'), we always generate 2 differently sized objects of each color-shape pair. So an agent that recognizes the color and shape in the instruction has a $50\%$ chance of picking the right object, and how to interact with that object, if necessary, is already familiar. We observe that for data examples where the instruction specifies the color of the target in addition to the size, the baseline achieves $53.00\% \pm 1.36$ and GECA achieves $47.51\% \pm 12.59$, suggesting the agents randomly select a circle of the specified color. As in splits (B) and (C), the model only maximally attends to the correct target in $4\%$ of errors. Thus the obtained performance indicates a complete failure to genuinely understand 'small' and pick a small circle from among larger ones in arbitrary circumstances.
161
+
162
+ F: Novel composition of actions and arguments. Another important phenomenon in natural language is categorizing words into classes whose entries share semantic properties [35]. We study the simple case of nominal class inference, establishing two categories of nouns that, depending on their weight (light or heavy), lead to a different interpretation of the verb taking them as patient arguments. Recall from Section 3 that pushing or pulling a heavy object over the same distance (i.e., number of grid cells) as a light object requires twice as many target actions of 'push' or 'pull'.
163
+
164
+ This split examines inferences about latent object class and how to correctly interact with objects, as shown in Figure 6. We hold out all examples where the verb in the instruction is 'push' and the target object is a square of size 3, meaning it is in the heavy class and needs to be pushed twice to move by one grid cell. A model should infer that this square of size 3 is 'heavy' from its extensive training experience 'pulling' this object, each time needing two actions to move it (shown through 7,656 training trials). Adding to this experience, all circles and cylinders of this size also need to be pushed twice (appearing 32,738 times). Note that Hill et al. [17] similarly study verb-noun binding.
165
+
166
+ Both methods perform similarly to their accuracy on the random split, namely $92.52\% \pm 6.75$ and $85.99\% \pm 0.85$ for the baseline and GECA respectively (examples with 'push' in the random split were at $96.64\% \pm 0.52$ for the baseline and $86.72\% \pm 1.23$ for GECA), and both seem able to correctly categorize the square of size 3 in the class heavy and interact with it accordingly. This is consistent with the findings of Hill et al. [17] with regard to generalizing familiar actions to new objects.
167
+
168
+ G, H: Novel adverbs. In the penultimate experiment we look at transforming target sequences in response to how adverbs modify command verbs. The adverbs all require the agent to do something (i.e., generate a particular action sequence) at some predefined interval (see Figures 1 and 2 for examples). To do something cautiously means looking both ways before crossing grid lines, to do something while spinning requires spinning around after moving to a grid cell, to do something hesitantly makes the agent stay put after each step (with the action stay), and finally to do something while zigzagging only applies to moving diagonally on the grid. Where normally the agent would first travel horizontally all the way and then vertically, when doing something while zigzagging the agent will alternate between moving vertically and horizontally every grid cell.
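+
+ Two of these transformations, sketched as plain Python over action symbols (our reading of the descriptions above; the exact gSCAN action inventory, e.g., for spinning, may differ):
+
+ ```python
+ def hesitantly(actions):
+     """Insert a 'stay' after each movement step."""
+     out = []
+     for a in actions:
+         out.append(a)
+         if a == "walk":
+             out.append("stay")
+     return out
+
+ def while_spinning(actions):
+     """Spin around (four left turns) after moving to each grid cell."""
+     out = []
+     for a in actions:
+         out.append(a)
+         if a == "walk":
+             out += ["L_turn"] * 4
+     return out
+
+ print(hesitantly(["walk", "walk"]))  # ['walk', 'stay', 'walk', 'stay']
+ ```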
169
+
170
+ We design two experiments with adverbs. The first examines learning the adverb 'cautiously' from just one or a few examples and using it in different world states (few-shot learning). For instance, the agent sees a handful of instructions with the adverb 'cautiously' during training, and needs to generalize to all possible instructions with that adverb (discarding those with longer target sequences than seen during training). To give the models a fair chance, we examine learning from one to fifty examples. The second experiment examines whether a model can generalize a familiar adverb to a familiar verb, namely 'while spinning' to 'pull'. In this experiment the agent sees ample evidence of both the tested adverb (66,229 examples) and the verb (68,700 examples), but has never encountered them together during training. These experiments are related to the 'around right' split introduced for SCAN by Loula et al. [30], but in this case, the grounded meaning of each adverb in the world state changes its effect on the target sequence. Therefore we expect methods like the permutation-equivariant models of Gordon et al. [14], but also GECA, to have no impact on generalization.
171
+
172
+ During few-shot learning, the models fail catastrophically when learning 'cautiously' from just one demonstration (0% correct; exact match again). We experiment with increasing the number $(k)$ of 'cautiously' examples during training (Table 1), but find it only marginally improves the baseline's abysmal performance. Even with $k = 50$ , performance is only $4.6\%$ correct, emphasizing the challenges of acquiring abstract concepts from limited examples. For combining spinning and pulling, both methods also struggle ( $22.70\% \pm 4.59$ for baseline and $11.83\% \pm 0.31$ for GECA). In each case, performance drops as a function of target length (Figures 3 and 4 Appendix F).
173
+
174
+ I: Novel action sequence lengths. For this split we only train the baseline model, as data augmentation will not produce longer training examples. When trained on action sequences of length $\leq 15$, the baseline performs well on held-out examples below this length ($94.98\% \pm 0.1$) but degrades for longer sequences ($2.10\% \pm 0.05$ overall; $19.32\% \pm 0.02$ for length 16; $1.71\% \pm 0.38$ for length 17; below $1\%$ for length $\geq 18$). Unsurprisingly, as with SCAN, baselines struggle with this task.
175
+
176
+ # 6 Conclusion
177
+
178
+ The SCAN benchmark has catalyzed new methods for compositional learning [2, 27, 14, 36]. Our results, on a new gSCAN (grounded SCAN) benchmark, suggest these methods largely exploit artifacts in SCAN that are not central to the nature of compositional generalization. gSCAN removes these artifacts by introducing more sophisticated semantics through grounding. We trained strong multi-modal and GECA baselines, finding that both methods fail on the vast majority of splits. Both methods succeed only on inferring object class and using it for interaction (similarly to [17]), while GECA improves only on novel composition of object properties ('red squares'). The complete failure on the remaining splits shows that advances are needed in neural architectures for compositional learning. Progress on gSCAN may come from continuing the lines of work that have made progress on SCAN. Meta-learning [27] or permutation equivariance [14] could support compositional generalization, if the types of generalizations examined here can be captured in a meta-training procedure or equivariance definition. For now, at least, applying these methods to gSCAN requires highly non-trivial extensions.
179
+
180
+ In future work, we plan to extend gSCAN to support reinforcement learning (RL), although certain splits are not straightforward to translate ('pushing', 'spinning', etc.). The benchmark seems demanding enough with supervision, without adding RL's sample complexity issues (e.g., RL seems unlikely to improve few-shot learning or target identification, but it may help with length). gSCAN could use RGB images instead of partially-symbolic state representations, although again this should only add difficulty. We expect progress on gSCAN to translate to more naturalistic settings, e.g., [31, 39, 22, 6], since the issues studied in gSCAN feature prominently in realistic NLU tasks such as teaching autonomous agents by demonstration. Nevertheless, we cannot know for certain how progress will extend to other tasks until new approaches emerge for tackling gSCAN.
181
+
182
+ # Broader Impact
183
+
184
+ Systematic generalization characterizes human language and thought, but it remains a challenge for modern AI systems. The gSCAN benchmark is designed to stimulate further research on this topic. Advances in machine systematic generalization could facilitate improvements in learning efficiency, robustness, and human-computer interaction. We do not anticipate that the broader impacts would selectively benefit some groups at the expense of others.
185
+
186
+ # Acknowledgments and Disclosure of Funding
187
+
188
+ We are grateful to Adina Williams and Ev Fedorenko for very helpful discussions, to João Loula who did important initial work to explore compositional learning in a grid world, to Robin Vaaler for comments on an earlier version of this paper, and to Esther Vecht for important design advice and support. Through B. Lake's position at NYU, this research was partially funded by NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science.
189
+
190
+ # References
191
+
192
+ [1] Andreas, J. (2019). Measuring compositionality in representation learning. In Proceedings of ICLR, New Orleans, LA. Published online: https://openreview.net/group?id=ICLR.cc/2019/conference.
193
+ [2] Andreas, J. (2020). Good-enough compositional data augmentation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7556-7566, Online. Association for Computational Linguistics.
194
+ [3] Bahdanau, D., Beaudoin, P., and Courville, A. (2019a). CLOSURE: Assessing Systematic Generalization of CLEVR Models. arXiv preprint.
195
+ [4] Bahdanau, D., Murty, S., Noukhovitch, M., Nguyen, T. H., de Vries, H., and Courville, A. (2019b). Systematic generalization: What is required and can it be learned? International Conference on Learning Representations (ICLR), pages 1-16.
196
+ [5] Bastings, J., Baroni, M., Weston, J., Cho, K., and Kiela, D. (2018). Jump to better conclusions: SCAN both left and right. In Proceedings of the EMNLP BlackboxNLP Workshop, pages 47-55, Brussels, Belgium.
197
+ [6] Bisk, Y., Yuret, D., and Marcu, D. (2016). Natural language communication with robots. NAACL: Human Language Technologies, pages 751-761.
198
+ [7] Bowman, S. R., Potts, C., and Manning, C. D. (2015). Recursive Neural Networks Can Learn Logical Semantics. Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality (CVSC), Beijing, China, July 26-31, 2015, pages 12-21.
199
+ [8] Chevalier-Boisvert, M., Bahdanau, D., Lahlou, S., Willems, L., Saharia, C., Nguyen, T. H., and Bengio, Y. (2019). BabyAI: A platform to study the sample efficiency of grounded language learning. In Proceedings of ICLR, New Orleans, LA. Published online: https://openreview.net/group?id=ICLR.cc/2019/conference.
200
+ [9] Chomsky, N. (1957). Syntactic Structures. Mouton, Berlin, Germany.
201
+ [10] Dessi, R. and Baroni, M. (2019). CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks. In Proceedings of ACL, Firenze, Italy. In press.
202
+ [11] Devlin, J., Uesato, J., Bhupatiraju, S., Singh, R., Mohamed, A. R., and Kohli, P. (2017). RobustFill: Neural program learning under Noisy I/O. International Conference on Machine Learning (ICML), 3:1641-1658.
203
+ [12] Eslami, S. M. A., Rezende, D. J., Besse, F., Viola, F., Morcos, A. S., Garnelo, M., Ruderman, A., Rusu, A. A., Danihelka, I., Gregor, K., Reichert, D. P., Buesing, L., Weber, T., Vinyals, O., Rosenbaum, D., Rabinowitz, N., King, H., Hillier, C., Botvinick, M., Wierstra, D., Kavukcuoglu, K., and Hassabis, D. (2018). Neural scene representation and rendering. Science, 360(6394):1204-1210.
204
+ [13] Geiger, A., Cases, I., Karttunen, L., and Potts, C. (2019). Posing fair generalization tasks for natural language inference. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4485-4495, Hong Kong, China. Association for Computational Linguistics.
205
+ [14] Gordon, J., Lopez-Paz, D., Baroni, M., and Bouchacourt, D. (2020). Permutation equivariant models for compositional generalization in language. In International Conference on Learning Representations (ICLR).
206
+ [15] Graves, A., Wayne, G., and Danihelka, I. (2014). Neural Turing Machines. arXiv preprint.
207
+ [16] Hill, F., Hermann, K. M., Blunsom, P., and Clark, S. (2017). Understanding grounded language learning agents. arXiv preprint, pages 1-13.
208
+
209
+ [17] Hill, F., Lampinen, A., Schneider, R., Clark, S., Botvinick, M., McClelland, J. L., and Santoro, A. (2020). Environmental drivers of systematicity and generalisation in a situated agent. In International Conference on Learning Representations (ICLR).
210
+ [18] Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8):1735-1780.
211
+ [19] Hupkes, D., Dankers, V., Mul, M., and Bruni, E. (2019). The compositionality of neural networks: integrating symbolism and connectionism.
212
+ [20] Johnson, J., Hariharan, B., van der Maaten, L., Fei-Fei, L., Zitnick, L., and Girshick, R. (2017a). CLEVR: a diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of CVPR, pages 1988–1997, Honolulu, HI.
213
+ [21] Johnson, J., Hariharan, B., van der Maaten, L., Hoffman, J., Fei-fei, L., Zitnick, C. L., and Girshick, R. (2017b). Inferring and Executing Programs for Visual Reasoning. In International Conference on Computer Vision.
214
+ [22] Keysers, D., Scharli, N., Scales, N., Buisman, H., Furrer, D., Kashubin, S., Momchev, N., Sinopalnikov, D., Stafiniak, L., Tihon, T., Tsarkov, D., Wang, X., van Zee, M., and Bousquet, O. (2019). Measuring Compositional Generalization: A Comprehensive Method on Realistic Data. In International Conference on Learning Representations (ICLR).
215
+ [23] Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In Bengio, Y. and LeCun, Y., editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
216
+ [24] Kuhnle, A. and Copestake, A. A. (2017). Shapeworld - A new test methodology for multimodal language understanding. CoRR, abs/1704.04517.
217
+ [25] Lake, B. and Baroni, M. (2018). Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In Proceedings of ICML, pages 2879-2888, Stockholm, Sweden.
218
+ [26] Lake, B., Ullman, T., Tenenbaum, J., and Gershman, S. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40:1-72.
219
+ [27] Lake, B. M. (2019). Compositional generalization through meta sequence-to-sequence learning. In Advances in Neural Information Processing Systems.
220
+ [28] Lake, B. M., Linzen, T., and Baroni, M. (2019). Human few-shot learning of compositional instructions. In Proceedings of the 41st Annual Conference of the Cognitive Science Society.
221
+ [29] LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521:436-444.
222
+ [30] Loula, J., Baroni, M., and Lake, B. (2018). Rearranging the familiar: Testing compositional generalization in recurrent networks. In Proceedings of the EMNLP BlackboxNLP Workshop, pages 108-114, Brussels, Belgium.
223
+ [31] Matuszek, C., FitzGerald, N., Zettlemoyer, L., Liefeng, B., and Fox, D. (2012). A Joint Model of Language and Perception for Grounded Attribute Learning. Proceedings of the 29th International Conference on Machine Learning, pages 1671-1678.
224
+ [32] Mei, H., Bansal, M., and Walter, M. (2016). Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In Proceedings of AAAI, pages 2772-2778, Phoenix, AZ.
225
+ [33] Montague, R. (1970). Universal Grammar. Theoria, 36:373-398.
226
+ [34] Nye, M. I., Solar-Lezama, A., Tenenbaum, J. B., and Lake, B. M. (2020). Learning Compositional Rules via Neural Program Synthesis. arXiv preprint.
227
+ [35] Pustejovsky, J. (1991). The generative lexicon. Computational Linguistics, 17(4):409-441.
228
+
229
+ [36] Russin, J., Jo, J., O'Reilly, R. C., and Bengio, Y. (2019). Compositional generalization in a deep seq2seq model by separating syntax and semantics. CoRR, abs/1904.09708.
230
+ [37] Schuster, M. and Paliwal, K. (1997). Bidirectional recurrent neural networks. Trans. Sig. Proc., 45(11):2673-2681.
231
+ [38] Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N. D., and Weinberger, K. Q., editors, Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc.
232
+ [39] Talmor, A. and Berant, J. (2018). The Web as a Knowledge-base for Answering Complex Questions. In NAACL.
233
+ [40] Wang, Z. and Lake, B. M. (2019). Modeling question asking using neural program generation. arXiv preprint, pages 1-14.
abenchmarkforsystematicgeneralizationingroundedlanguageunderstanding/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9b591c5ed267a1db43505511d652a88147049911d856aaa28b6ef1ec9304e168
3
+ size 167701
abenchmarkforsystematicgeneralizationingroundedlanguageunderstanding/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7acbc4d1ea26aa59258c43715861d0605733f10ac5bd189522ca6940484bbf84
3
+ size 333729
abiologicallyplausibleneuralnetworkforslowfeatureanalysis/5481c27e-1703-4944-80a4-c7b08ceeb4a0_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4f9e8af9315da0d6b02bb8fcf56596fdbada3f7f282e04b9bcd76a731625f6f
3
+ size 74322
abiologicallyplausibleneuralnetworkforslowfeatureanalysis/5481c27e-1703-4944-80a4-c7b08ceeb4a0_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:51ecb0d0db7adcd881842bba872c96b73b25b0e57bbe2769be6f98cfe30542f6
3
+ size 89999
abiologicallyplausibleneuralnetworkforslowfeatureanalysis/5481c27e-1703-4944-80a4-c7b08ceeb4a0_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:25d7da4c7650700a534017b94ef9395588198a10b6bf599d1b0f30957bb9a308
3
+ size 1352697
abiologicallyplausibleneuralnetworkforslowfeatureanalysis/full.md ADDED
@@ -0,0 +1,378 @@
 
 
1
+ # A biologically plausible neural network for Slow Feature Analysis
2
+
3
+ David Lipshutz*1
4
+
5
+ Charlie Windolf*1,2
6
+
7
+ Siavash Golkar
8
+
9
+ Dmitri B. Chklovskii<sup>1,3</sup>
10
+
11
+ <sup>1</sup> Center for Computational Neuroscience, Flatiron Institute
12
+
13
+ $^{2}$ Department of Statistics, Columbia University
14
+
15
+ <sup>3</sup> Neuroscience Institute, NYU Medical Center
16
+
17
+ {dlipshutz,sgolkar,dchklovskii}@flatironinstitute.org
18
+
19
+ c.windolf@columbia.edu
20
+
21
+ # Abstract
22
+
23
+ Learning latent features from time series data is an important problem in both machine learning and brain function. One approach, called Slow Feature Analysis (SFA), leverages the slowness of many salient features relative to the rapidly varying input signals. Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features. However, despite the potential relevance of SFA for modeling brain function, there is currently no SFA algorithm with a biologically plausible neural network implementation, by which we mean an algorithm that operates in the online setting and can be mapped onto a neural network with local synaptic updates. In this work, starting from an SFA objective, we derive an SFA algorithm, called Bio-SFA, with a biologically plausible neural network implementation. We validate Bio-SFA on naturalistic stimuli.
24
+
25
+ # 1 Introduction
26
+
27
+ Unsupervised learning of meaningful latent features from noisy, high-dimensional data is a fundamental problem for both machine learning and brain function. Often, the relevant features in an environment (e.g., objects) vary on relatively slow timescales when compared to noisy sensory data (e.g., the light intensity measured by a single receptor in the retina). Therefore, temporal slowness has been proposed as a computational principle for extracting relevant latent features [8, 19, 31].
28
+
29
+ A popular approach for extracting slow features, introduced by Wiskott and Sejnowski [31], is Slow Feature Analysis (SFA). SFA is an unsupervised learning algorithm that extracts the slowest projection, in terms of discrete time derivative, from a nonlinear expansion of the input signal. When trained on natural image sequences, SFA extracts features that resemble response properties of complex cells in early visual processing [2]. Impressively, hierarchical networks of SFA trained on simulated rat visual streams learn representations of position and orientation similar to representations encoded in the hippocampus [9].
30
+
31
+ The relevance of SFA is strengthened by its close relationship to information theoretic objectives and its equivalence to other successful algorithms under certain assumptions. When the time series is reversible and Gaussian, (Linear) SFA is equivalent to maximizing mutual information between the current output of the system and the next input [7, 5]. Moreover, features extracted by several
32
+
33
+ ![](images/732a1b201f9edbb44572a5f1ee0d89c0ccb5806336783e931a47e575ad1260cc.jpg)
34
+ Figure 1: A biologically plausible neural network implementation of Bio-SFA. The figure on the left depicts the architecture of the neural network. Blue circles are the input neurons and black circles are the output neurons with separate dendritic and somatic compartments. Lines with circles connecting the neurons denote synapses. Filled (resp. empty) circles denote non-Hebbian (resp. anti-Hebbian) synapses.
35
+
36
+ <table><tr><td>Variable</td><td>Biological interpretation</td></tr><tr><td>xt</td><td>expanded signal</td></tr><tr><td>W</td><td>feedforward synaptic weights</td></tr><tr><td>at := Wxt</td><td>dendritic current</td></tr><tr><td>M</td><td>lateral synaptic weights</td></tr><tr><td>yt</td><td>output signal</td></tr><tr><td colspan="2">Neural dynamics &amp; plasticity rules</td></tr><tr><td colspan="2">dyt(γ)/dγ = at - Myt(γ)</td></tr><tr><td colspan="2">ΔW = 2η((yt + yt-1)(xt + xt-1)T - atxtT)</td></tr><tr><td colspan="2">ΔM = ητ((yt + yt-1)(yt + yt-1)T - M)</td></tr></table>
37
+
38
+ algorithms that favor predictability, when trained on real-world datasets, are similar to those extracted by SFA [29]. Finally, (Linear) SFA is equivalent to time-lagged independent component analysis [3, 10], a popular statistical technique used to analyze molecular dynamics [22, 20, 26, 27].
39
+
40
+ Due to its success in modeling aspects of neural processing, deriving an algorithm for SFA with a biologically plausible neural network implementation is an important task. For the purposes of this work, we define biologically plausible to mean that the neural network operates in the online setting (i.e., after receiving an input, it computes its output before receiving its next input, never storing a significant fraction of past inputs), and its synaptic learning rules are local (i.e., a synaptic weight update depends only on variables represented in the pre- and postsynaptic neurons). In addition to satisfying basic properties of neural circuits, these online and locality requirements can lead to networks that are well-suited for analyzing large datasets because they operate in the online setting with low computational overhead.
41
+
42
+ While there are a few online algorithms for SFA, none have biologically plausible neural network implementations that extract multiple slow features. Moreover, there are no neural network implementations for the related information theoretic algorithms discussed above [29, 5]. Kompella et al. propose Incremental SFA [14] (see [16, 32] for extensions). However, this approach relies on non-local learning rules, so it does not meet the above criteria for biological plausibility. Malik et al. [17] use an online generalized eigenvalue problem solver [33] to derive an online algorithm for SFA. While their algorithm for finding one-dimensional projections can be implemented in a biologically plausible network, their extension to multi-dimensional projections is not fully online.
43
+
44
+ In this work, we propose Bio-SFA: an online algorithm for SFA with a biologically plausible neural network implementation, Fig. 1. We adopt a normative approach to derive our algorithm. First, we express the solution of the SFA problem in terms of an objective from classical multidimensional scaling. We then manipulate the objective to arrive at a min-max optimization problem that can be solved in the online setting by taking stochastic gradient descent-ascent steps. These steps can be expressed in terms of neural activities and updates to synaptic weight matrices, which leads to a natural interpretation of our online algorithm as a biologically plausible neural network. To validate our approach, we test our algorithm on datasets of naturalistic stimuli and reproduce results originally performed in the offline setting.
45
+
46
+ The synaptic updates of the feedforward weights $\mathbf{W}$ in our network are similar, although not identical, to the updates proposed heuristically by Földiák [8] to extract slow temporal features. However, there is no theoretical analysis of the algorithm in [8]. In contrast, in our normative approach, Bio-SFA is derived directly from an SFA objective, so we can analytically predict its output, as well as the synaptic weights, without resorting to numerical simulation. In addition, the comparison of our learning rules with Földiák's illuminates the relationship of [8] to SFA.
47
+
48
+ # 2 Slow Feature Analysis
49
+
50
+ Here and below, vectors are boldface lowercase letters (e.g., v), and matrices are boldface uppercase letters (e.g., M). We use superscripts to denote the components of a vector (e.g., $v^i$ ).
51
+
52
+ # 2.1 Problem statement
53
+
54
+ Wiskott and Sejnowski [31] proposed the following 2 step method for extracting slow features from a noisy data set: (1) generate a nonlinear expansion of the input signal, and (2) find the slowest, in terms of discrete time derivative, low-dimensional projection of the expanded signal. In this section, we review these 2 steps.
55
+
56
+ Let $\{\mathbf{s}_0,\mathbf{s}_1,\dots ,\mathbf{s}_T\}$ be a $d$-dimensional input signal. The first step of SFA is to generate an $m$-dimensional expansion $\{\mathbf{x}_t\}$, referred to as the expanded signal, of $\{\mathbf{s}_t\}$. Let $\mathbf{h} = (h^{1},\ldots ,h^{m}):\mathbb{R}^d\to \mathbb{R}^m$ be an expansion function and define
57
+
58
+ $$
59
+ \mathbf {x} _ {t} := \mathbf {h} (\mathbf {s} _ {t}) - \frac {1}{T} \sum_ {t ^ {\prime} = 1} ^ {T} \mathbf {h} (\mathbf {s} _ {t ^ {\prime}}), \qquad t = 0, 1, \ldots , T,
60
+ $$
61
+
62
+ so that $\{\mathbf{x}_t\}$ is centered.
63
+
64
+ Let $k < m$ . The second step of SFA is to find the $k$ -dimensional linear projection $\{\mathbf{y}_t\}$ of the expanded signal $\{\mathbf{x}_t\}$ that minimizes the mean discrete-time derivative of the output signal $\{\mathbf{y}_t\}$ , subject to a whitening constraint. To be precise, the objective can be formulated as follows:
65
+
66
+ $$
67
+ \underset{\{\mathbf{y}_t\}}{\arg\min} \frac{1}{T} \sum_{t=1}^{T} \|\dot{\mathbf{y}}_t\|^2 \quad \text{subject to} \quad \frac{1}{T} \sum_{t=1}^{T} \mathbf{y}_t \mathbf{y}_t^{\top} = \mathbf{I}_k, \tag{1}
68
+ $$
69
+
70
+ where $\dot{\mathbf{y}}_t$ is the discrete time derivative of $\mathbf{y}_t$, and $\mathbf{y}_t$ is a linear projection of $\mathbf{x}_t$; that is,
71
+
72
+ $$
73
+ \dot {\mathbf {y}} _ {t} := \mathbf {y} _ {t} - \mathbf {y} _ {t - 1}, \quad t = 1, \dots , T, \tag {2}
74
+ $$
75
+
76
+ $$
77
+ \mathbf{y}_t := \mathbf{V}^{\top} \mathbf{x}_t, \quad t = 0, 1, \dots, T, \quad \text{for some } \mathbf{V} \in \mathbb{R}^{m \times k}. \tag{3}
78
+ $$
79
+
80
+ Note, since $\{\mathbf{x}_t\}$ is centered, the projection $\{\mathbf{y}_t\}$ is also centered.
81
+
82
+ # 2.2 Quadratic SFA
83
+
84
+ The focus of this work is to derive a biologically plausible neural network that learns to output the optimal output signal $\{\mathbf{y}_t\}$ when streamed the expanded signal $\{\mathbf{x}_t\}$ . While our algorithm does not depend on the specific choice of the expansion function $\mathbf{h}$ , for concreteness, we provide an example here.
85
+
86
+ In their original paper, Wiskott and Sejnowski [31] proposed setting the components of the function $\mathbf{h}:\mathbb{R}^d\to \mathbb{R}^m$ to be the monomials of degree one and two. This choice, which we refer to as "Quadratic SFA", has been widely used in applications [31, 2, 9, 34]. In particular, let $m\coloneqq d + d(d + 1) / 2$ and $h^1,\ldots ,h^m:\mathbb{R}^d\to \mathbb{R}$ denote the $m$ possible linear and quadratic functions of the form
87
+
88
+ $$
89
+ h(\mathbf{s}) := s^i \qquad \text{or} \qquad h(\mathbf{s}) := s^i s^j,
90
+ $$
91
+
92
+ for $1 \leq i \leq j \leq d$ . (When only the linear features are used, i.e., $x^i = s^i$ + const, this is referred to "Linear SFA".) Thus, each component of the output signal is a quadratic polynomial in the components of the signal of the form:
93
+
94
+ $$
95
+ y^i = V_{1i} h^1(\mathbf{s}) + \dots + V_{mi} h^m(\mathbf{s}) + \text{const.} \tag{4}
96
+ $$
97
+
98
+ Biologically, there are a number of mechanisms that have been proposed for computing products of the form $s^i s^j$ ; see, e.g., [13] and the references therein. One such mechanism uses "Sigma-Pi" units [23], which multiplies two inputs via gating and have been invoked in cortical modeling [18].
99
+
100
+ In Sec. 6, we perform our numerical experiments using the quadratic expansion.
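+
+ A compact numpy sketch of the quadratic expansion together with the centering of Sec. 2.1 (the vectorization details are ours):
+
+ ```python
+ import numpy as np
+
+ def quadratic_expansion(S):
+     """S: (T, d) signal -> (T, m) centered expanded signal, m = d + d(d+1)/2."""
+     T, d = S.shape
+     pairs = [(i, j) for i in range(d) for j in range(i, d)]
+     quad = np.stack([S[:, i] * S[:, j] for i, j in pairs], axis=1)
+     H = np.concatenate([S, quad], axis=1)   # monomials of degree one and two
+     return H - H.mean(axis=0)               # subtract the time average
+
+ X = quadratic_expansion(np.random.randn(100, 3))
+ assert X.shape == (100, 3 + 3 * 4 // 2)     # m = 9 for d = 3
+ ```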
101
+
102
+ # 3 A novel SFA objective from classical multidimensional scaling
103
+
104
+ To derive an SFA network, we identify an objective function whose optimization leads to an online algorithm that can be implemented in a biologically plausible network. To identify the objective function, we first rewrite the SFA output as a principal subspace projection and then take advantage of the fact that principal subspace projections can be expressed as solutions of objectives from classical multidimensional scaling [6]. This approach is similar to the derivation of a biologically plausible neural network for canonical correlation analysis [15].
105
+
106
+ To begin, we define the discrete derivative process $\{\dot{\mathbf{x}}_t\}$ and the delayed sum process $\{\bar{\mathbf{x}}_t\}$ by $\dot{\mathbf{x}}_t \coloneqq \mathbf{x}_t - \mathbf{x}_{t-1}$ and $\bar{\mathbf{x}}_t \coloneqq \mathbf{x}_t + \mathbf{x}_{t-1}$ , for $t = 1, \dots, T$ . In addition, we define the sample covariance matrices
107
+
108
+ $$
109
+ \mathbf {C} _ {x x} := \frac {1}{T} \sum_ {t = 1} ^ {T} \mathbf {x} _ {t} \mathbf {x} _ {t} ^ {\top}, \quad \mathbf {C} _ {\dot {x} \dot {x}} := \frac {1}{T} \sum_ {t = 1} ^ {T} \dot {\mathbf {x}} _ {t} \dot {\mathbf {x}} _ {t} ^ {\top}, \quad \mathbf {C} _ {\bar {x} \bar {x}} := \frac {1}{T} \sum_ {t = 1} ^ {T} \bar {\mathbf {x}} _ {t} \bar {\mathbf {x}} _ {t} ^ {\top}. \tag {5}
110
+ $$
111
+
112
+ Substituting the definitions in Eqs. (2), (3) and (5) into the objective in Eq. (1), we can equivalently write the SFA problem as the following constrained minimization problem of the projection matrix $\mathbf{V}$ :
113
+
114
+ $$
115
+ \underset{\mathbf{V} \in \mathbb{R}^{m \times k}}{\arg\min} \operatorname{Tr} \mathbf{V}^{\top} \mathbf{C}_{\dot{x}\dot{x}} \mathbf{V} \quad \text{subject to} \quad \mathbf{V}^{\top} \mathbf{C}_{xx} \mathbf{V} = \mathbf{I}_k. \tag{6}
116
+ $$
117
+
118
+ In view of the whitening constraint in Eq. (6), the problem can be equivalently written as the maximization of the one-step autocorrelation of the projection $\{\mathbf{y}_t\}$ (see Appendix A for details):
119
+
120
+ $$
121
+ \underset{\mathbf{V} \in \mathbb{R}^{m \times k}}{\arg\max} \operatorname{Tr} \mathbf{V}^\top \mathbf{C}_{\bar{x}\bar{x}} \mathbf{V} \quad \text{subject to} \quad \mathbf{V}^\top \mathbf{C}_{xx} \mathbf{V} = \mathbf{I}_k. \tag{7}
122
+ $$
123
+
124
+ Next, setting $\hat{\mathbf{x}}_t\coloneqq \mathbf{C}_{xx}^{-1 / 2}\bar{\mathbf{x}}_t$ for $t = 1,\ldots ,T$ , and
125
+
126
+ $$
127
+ \hat {\mathbf {V}} := \mathbf {C} _ {x x} ^ {1 / 2} \mathbf {V}, \qquad \qquad \mathbf {C} _ {\hat {x} \hat {x}} := \frac {1}{T} \sum_ {t = 1} ^ {T} \hat {\mathbf {x}} _ {t} \hat {\mathbf {x}} _ {t} ^ {\top} = \mathbf {C} _ {x x} ^ {- 1 / 2} \mathbf {C} _ {\bar {x} \bar {x}} \mathbf {C} _ {x x} ^ {- 1 / 2},
128
+ $$
129
+
130
+ we see that $\mathbf{V}$ is a solution of Eq. (7) if and only if $\hat{\mathbf{V}}$ is a solution of:
131
+
132
+ $$
133
+ \underset{\hat{\mathbf{V}} \in \mathbb{R}^{m \times k}}{\arg\max} \operatorname{Tr} \hat{\mathbf{V}}^\top \mathbf{C}_{\hat{x}\hat{x}} \hat{\mathbf{V}} \quad \text{subject to} \quad \hat{\mathbf{V}}^\top \hat{\mathbf{V}} = \mathbf{I}_k. \tag{8}
134
+ $$
135
+
136
+ Notably, Eq. (8) is the variance maximization objective for the PCA eigenproblem, which is optimized when the column vectors of $\hat{\mathbf{V}}$ span the $k$ -dimensional principal subspace of $\mathbf{C}_{\hat{x}\hat{x}}$ .
137
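+ The chain of reductions so far already yields an offline solver. As a hedged sketch (ours, assuming $\mathbf{C}_{xx}$ is full rank), one can solve Eq. (8) by an eigendecomposition of $\mathbf{C}_{\hat{x}\hat{x}}$ and map the top-$k$ eigenvectors back through the whitening transform:
+
+ ```python
+ import numpy as np
+ from scipy.linalg import eigh, sqrtm
+
+ def offline_sfa(X: np.ndarray, k: int) -> np.ndarray:
+     """Solve Eqs. (7)-(8) offline. X has shape (m, T); returns V of shape (m, k)."""
+     T = X.shape[1]
+     Xbar = X[:, 1:] + X[:, :-1]              # delayed-sum process {x̄_t}
+     Cxx = X @ X.T / T
+     Cbb = Xbar @ Xbar.T / (T - 1)
+     whiten = np.linalg.inv(sqrtm(Cxx).real)  # C_xx^{-1/2}
+     Chh = whiten @ Cbb @ whiten              # C_{x̂x̂}
+     _, evecs = eigh(Chh)                     # eigenvalues in ascending order
+     Vhat = evecs[:, -k:]                     # top-k principal subspace of C_{x̂x̂}
+     return whiten @ Vhat                     # V = C_xx^{-1/2} V̂
+ ```
+
+ Equivalently, the columns of $\mathbf{V}$ solve the generalized eigenvalue problem $\mathbf{C}_{\bar{x}\bar{x}}\mathbf{v} = \lambda\,\mathbf{C}_{xx}\mathbf{v}$ associated with the largest eigenvalues.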
+
138
+ Finally, we take advantage of the fact that principal subspace projections can be expressed as solutions of objectives from classical multidimensional scaling [6, 21]. To this end, define the data matrices
139
+
140
+ $$
141
+ \bar{\mathbf{X}} := \left[ \bar{\mathbf{x}}_1, \dots, \bar{\mathbf{x}}_T \right], \qquad\qquad \hat{\mathbf{X}} := \left[ \hat{\mathbf{x}}_1, \dots, \hat{\mathbf{x}}_T \right], \qquad\qquad \bar{\mathbf{Y}} := \left[ \bar{\mathbf{y}}_1, \dots, \bar{\mathbf{y}}_T \right].
142
+ $$
143
+
144
+ Then, since $\bar{\mathbf{y}}_t = \mathbf{V}^\top \bar{\mathbf{x}}_t = \hat{\mathbf{V}}^\top \hat{\mathbf{x}}_t$ , we see that $\bar{\mathbf{Y}}$ is the projection of $\hat{\mathbf{X}}$ onto its $k$ -dimensional principal subspace. As shown in [6], this principal projection can be expressed as a solution of the following objective from classical multidimensional scaling:
145
+
146
+ $$
147
+ \underset{\bar{\mathbf{Y}} \in \mathbb{R}^{k \times T}}{\arg\min} \frac{1}{2T^2} \left\| \bar{\mathbf{Y}}^\top \bar{\mathbf{Y}} - \hat{\mathbf{X}}^\top \hat{\mathbf{X}} \right\|_{\text{Frob}}^2 = \underset{\bar{\mathbf{Y}} \in \mathbb{R}^{k \times T}}{\arg\min} \frac{1}{2T^2} \left\| \bar{\mathbf{Y}}^\top \bar{\mathbf{Y}} - \bar{\mathbf{X}}^\top \mathbf{C}_{xx}^{-1} \bar{\mathbf{X}} \right\|_{\text{Frob}}^2. \tag{9}
148
+ $$
149
+
150
+ This objective minimizes the difference between the similarity of consecutive sums of output pairs, $\bar{\mathbf{y}}_t^\top \bar{\mathbf{y}}_{t'}$ , and the similarity of consecutive sums of whitened input pairs, $\hat{\mathbf{x}}_t^\top \hat{\mathbf{x}}_{t'}$ , where similarity is measured in terms of inner products. Here we have assumed that $\mathbf{C}_{xx}$ is full rank. If $\mathbf{C}_{xx}$ is not full rank (but is at least rank $k$ ), we can replace $\mathbf{C}_{xx}^{-1}$ in Eq. (9) with the Moore-Penrose inverse $\mathbf{C}_{xx}^{+}$ (see Appendix A).
151
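+ This identity is easy to check numerically: the projection of $\hat{\mathbf{X}}$ onto its top-$k$ principal subspace attains the minimum of the CMDS objective. A small self-contained check on synthetic data (our sketch):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ m, T, k = 5, 200, 2
+ Xhat = rng.standard_normal((m, T))
+
+ # project Xhat onto its k-dimensional principal subspace
+ _, evecs = np.linalg.eigh(Xhat @ Xhat.T / T)
+ Ybar = evecs[:, -k:].T @ Xhat                # top-k principal projection
+
+ def cmds(Y):
+     """CMDS strain of Eq. (9) for a candidate k-dimensional output Y."""
+     return np.linalg.norm(Y.T @ Y - Xhat.T @ Xhat, "fro") ** 2 / (2 * T**2)
+
+ # the principal projection should beat any other k-dimensional projection
+ Yother = rng.standard_normal((k, m)) @ Xhat
+ assert cmds(Ybar) <= cmds(Yother)
+ ```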
+
152
+ # 4 Derivation of an online algorithm
153
+
154
+ While the objective (9) can be minimized by taking gradient descent steps in $\bar{\mathbf{Y}}$ , this does not lead to an online algorithm because the gradient steps require combining inputs from different time steps. Instead, we rewrite the objective as a min-max problem that can be solved by taking gradient descent-ascent steps that correspond to neural activities and synaptic update rules.
155
+
156
+ # 4.1 A min-max formulation
157
+
158
+ Expanding the square in Eq. (9) and dropping terms that do not depend on $\bar{\mathbf{Y}}$ , we obtain the minimization problem
159
+
160
+ $$
161
+ \min _ {\bar {\mathbf {Y}} \in \mathbb {R} ^ {k \times T}} \frac {1}{2 T ^ {2}} \operatorname {T r} \left(\bar {\mathbf {Y}} ^ {\top} \bar {\mathbf {Y}} \bar {\mathbf {Y}} ^ {\top} \bar {\mathbf {Y}} - 2 \bar {\mathbf {Y}} ^ {\top} \bar {\mathbf {Y}} \bar {\mathbf {X}} ^ {\top} \mathbf {C} _ {x x} ^ {- 1} \bar {\mathbf {X}}\right). \tag {10}
162
+ $$
163
+
164
+ By introducing dynamical matrix variables $\mathbf{W}$ and $\mathbf{M}$ , which will correspond to synaptic weights, we can rewrite the minimization problem (10) as a min-max problem:
165
+
166
+ $$
167
+ \min_{\bar{\mathbf{Y}} \in \mathbb{R}^{k \times T}} \min_{\mathbf{W} \in \mathbb{R}^{k \times m}} \max_{\mathbf{M} \in \mathcal{S}_{++}^{k}} L(\mathbf{W}, \mathbf{M}, \bar{\mathbf{Y}}),
168
+ $$
169
+
170
+ where $\mathcal{S}_{++}^{k}$ denotes the set of $k \times k$ positive definite matrices and
171
+
172
+ $$
173
+ L (\mathbf {W}, \mathbf {M}, \bar {\mathbf {Y}}) := \frac {1}{T} \operatorname {T r} \left(\bar {\mathbf {Y}} ^ {\top} \mathbf {M} \bar {\mathbf {Y}} - 2 \bar {\mathbf {Y}} ^ {\top} \mathbf {W} \bar {\mathbf {X}}\right) - \operatorname {T r} \left(\frac {1}{2} \mathbf {M} ^ {2} - \mathbf {W C} _ {x x} \mathbf {W} ^ {\top}\right). \tag {11}
174
+ $$
175
+
176
+ This step can be verified by differentiating $L(\mathbf{W}, \mathbf{M}, \bar{\mathbf{Y}})$ with respect to $\mathbf{W}$ and $\mathbf{M}$ and noting that the optimal values are achieved when $\mathbf{W}$ and $\mathbf{M}$ equal $\frac{1}{T}\bar{\mathbf{Y}}\bar{\mathbf{X}}^{\top}\mathbf{C}_{xx}^{-1}$ and $\frac{1}{T}\bar{\mathbf{Y}}\bar{\mathbf{Y}}^{\top}$ , respectively. Finally, we interchange the order of minimization with respect to $\bar{\mathbf{Y}}$ and $\mathbf{W}$ , as well as the order of optimization with respect to $\bar{\mathbf{Y}}$ and with respect to $\mathbf{M}$ :
177
+
178
+ $$
179
+ \min _ {\mathbf {W} \in \mathbb {R} ^ {k \times m}} \max _ {\mathbf {M} \in \mathcal {S} _ {+ +} ^ {k}} \min _ {\bar {\mathbf {Y}} \in \mathbb {R} ^ {k \times T}} L (\mathbf {W}, \mathbf {M}, \bar {\mathbf {Y}}). \tag {12}
180
+ $$
181
+
182
+ The second interchange is justified by the fact that $L(\mathbf{W}, \mathbf{M}, \bar{\mathbf{Y}})$ satisfies the saddle point property with respect to $\bar{\mathbf{Y}}$ and $\mathbf{M}$ , which follows from the fact that $L(\mathbf{W}, \mathbf{M}, \bar{\mathbf{Y}})$ is strictly convex in $\bar{\mathbf{Y}}$ (since $\mathbf{M}$ is positive definite) and strictly concave in $\mathbf{M}$ .
183
+
184
+ # 4.2 Offline algorithm
185
+
186
+ In the offline, or batch, setting, we have access to the sample covariance matrices $\mathbf{C}_{xx}$ and $\mathbf{C}_{\bar{x}\bar{x}}$ , and we solve the min-max problem (12) by alternating optimization steps. First, for fixed $\mathbf{W}$ and $\mathbf{M}$ , we minimize the objective function $L(\mathbf{W},\mathbf{M},\bar{\mathbf{Y}})$ over $\bar{\mathbf{Y}}$ , to obtain
187
+
188
+ $$
189
+ \bar {\mathbf {Y}} = \mathbf {M} ^ {- 1} \mathbf {W} \bar {\mathbf {X}}. \tag {13}
190
+ $$
191
+
192
+ With $\bar{\mathbf{Y}}$ fixed, we then perform a gradient descent-ascent step with respect to $\mathbf{W}$ and $\mathbf{M}$ :
193
+
194
+ $$
195
+ \mathbf {W} \leftarrow \mathbf {W} + 2 \eta \left(\frac {1}{T} \bar {\mathbf {Y}} \bar {\mathbf {X}} ^ {\top} - \mathbf {W C} _ {x x}\right) \tag {14}
196
+ $$
197
+
198
+ $$
199
+ \mathbf {M} \leftarrow \mathbf {M} + \frac {\eta}{\tau} \left(\frac {1}{T} \bar {\mathbf {Y}} \bar {\mathbf {Y}} ^ {\top} - \mathbf {M}\right). \tag {15}
200
+ $$
201
+
202
+ Here $\tau > 0$ is the ratio of the learning rates of $\mathbf{W}$ and $\mathbf{M}$ , and $\eta \in (0, \tau)$ is the (possibly time-dependent) learning rate for $\mathbf{W}$ . The condition $\eta < \tau$ ensures that the matrix $\mathbf{M}$ remains positive definite given a positive definite initialization.
203
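+ For reference, the offline updates (13)-(15) can be run directly on the sample covariances. The following NumPy sketch is ours; the learning rates are chosen only to satisfy $\eta < \tau$ and are not tuned to the paper's experiments:
+
+ ```python
+ import numpy as np
+
+ def offline_bio_sfa(Cxx, Cbb, k, n_iter=1000, eta=0.01, tau=0.5, seed=0):
+     """Alternating optimization of the min-max problem (12).
+     Cxx, Cbb: (m, m) sample covariances of {x_t} and {x̄_t}."""
+     m = Cxx.shape[0]
+     rng = np.random.default_rng(seed)
+     W = rng.standard_normal((k, m)) / np.sqrt(m)
+     M = np.eye(k)                              # positive definite initialization
+     for _ in range(n_iter):
+         P = np.linalg.solve(M, W)              # M^{-1} W, so Ȳ = P X̄ by Eq. (13)
+         YXb = P @ Cbb                          # (1/T) Ȳ X̄ᵀ
+         W += 2 * eta * (YXb - W @ Cxx)         # Eq. (14)
+         M += (eta / tau) * (YXb @ P.T - M)     # Eq. (15): (1/T) Ȳ Ȳᵀ = P C_bb Pᵀ
+     return W, M
+ ```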
+
204
+ # 4.3 Online algorithm
205
+
206
+ In the online setting, the expanded signal $\{\mathbf{x}_t\}$ is streamed one sample at a time, and the algorithm must compute its output without storing any significant fraction of the data in memory. In this case, at each time-step $t$ , we compute the output $\mathbf{y}_t = \mathbf{M}^{-1}\mathbf{a}_t$ , where $\mathbf{a}_t \coloneqq \mathbf{W}\mathbf{x}_t$ is the projection of $\mathbf{x}_t$ onto the $k$ -dimensional "slow" subspace, in a biologically plausible manner by running the following fast (neural) dynamics to equilibrium (our algorithm implements these dynamics using an Euler approximation):
207
+
208
+ $$
209
+ \frac {d \mathbf {y} _ {t} (\gamma)}{d \gamma} = \mathbf {a} _ {t} - \mathbf {M y} _ {t} (\gamma). \tag {16}
210
+ $$
211
+
212
+ To update the (synaptic) matrices $\mathbf{W}$ and $\mathbf{M}$ , we replace the covariance matrices in (14)-(15) with the rank-1 stochastic approximations:
213
+
214
+ $$
215
+ \frac {1}{T} \bar {\mathbf {Y}} \bar {\mathbf {X}} ^ {\top} \mapsto \bar {\mathbf {y}} _ {t} \bar {\mathbf {x}} _ {t} ^ {\top}, \quad \quad \quad \quad \frac {1}{T} \bar {\mathbf {Y}} \bar {\mathbf {Y}} ^ {\top} \mapsto \bar {\mathbf {y}} _ {t} \bar {\mathbf {y}} _ {t} ^ {\top}, \quad \quad \quad \quad \mathbf {C} _ {x x} \mapsto \mathbf {x} _ {t} \mathbf {x} _ {t} ^ {\top}.
216
+ $$
217
+
218
+ This yields the following stochastic gradient descent-ascent steps with respect to $\mathbf{W}$ and $\mathbf{M}$ :
219
+
220
+ $$
221
+ \mathbf {W} \leftarrow \mathbf {W} + 2 \eta \left(\bar {\mathbf {y}} _ {t} \bar {\mathbf {x}} _ {t} ^ {\top} - \mathbf {a} _ {t} \mathbf {x} _ {t} ^ {\top}\right)
222
+ $$
223
+
224
+ $$
225
+ \mathbf {M} \leftarrow \mathbf {M} + \frac {\eta}{\tau} \left(\bar {\mathbf {y}} _ {t} \bar {\mathbf {y}} _ {t} ^ {\top} - \mathbf {M}\right).
226
+ $$
227
+
228
+ We can now state our online SFA algorithm, which we refer to as Bio-SFA (Alg. 1).
229
+
230
+ Algorithm 1: Bio-SFA
231
+ input: expanded signal $\{\mathbf{x}_0,\mathbf{x}_1,\dots ,\mathbf{x}_T\}$ ; dimension $k$ ; parameters $\gamma ,\eta ,\tau$
232
+ initialize: matrix $\mathbf{W}$ and positive definite matrix $\mathbf{M}$
233
+ for $t = 1,2,\ldots ,T$ do
+ $\quad \mathbf{a}_t\gets \mathbf{W}\mathbf{x}_t$ (project inputs)
+ $\quad$ repeat
+ $\quad\quad \mathbf{y}_t\gets \mathbf{y}_t + \gamma (\mathbf{a}_t - \mathbf{M}\mathbf{y}_t)$ (compute neural output)
+ $\quad$ until convergence
+ $\quad \bar{\mathbf{x}}_t\gets \mathbf{x}_t + \mathbf{x}_{t - 1}$ ; $\bar{\mathbf{y}}_t\gets \mathbf{y}_t + \mathbf{y}_{t - 1}$ (delayed sums)
+ $\quad \mathbf{W}\leftarrow \mathbf{W} + 2\eta (\bar{\mathbf{y}}_t\bar{\mathbf{x}}_t^\top -\mathbf{a}_t\mathbf{x}_t^\top)$ ; $\mathbf{M}\leftarrow \mathbf{M} + \frac{\eta}{\tau} (\bar{\mathbf{y}}_t\bar{\mathbf{y}}_t^\top -\mathbf{M})$ (synaptic updates)
234
+ end for
235
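+ A minimal NumPy rendering of Alg. 1 might look as follows. For brevity, this sketch computes the equilibrium $\mathbf{y}_t = \mathbf{M}^{-1}\mathbf{a}_t$ with a direct linear solve rather than iterating the Euler discretization of Eq. (16) to convergence; the two agree at the fixed point. All names and defaults are ours, not taken from the reference implementation:
+
+ ```python
+ import numpy as np
+
+ def bio_sfa(X, k, eta=1e-3, tau=0.5, seed=0):
+     """Online Bio-SFA. X holds the expanded signal, shape (m, T+1).
+     Returns W, M and the outputs Y of shape (k, T)."""
+     m = X.shape[0]
+     rng = np.random.default_rng(seed)
+     W = rng.standard_normal((k, m)) / np.sqrt(m)
+     M = np.eye(k)                                  # positive definite initialization
+     y_prev, Y = np.zeros(k), []
+     for t in range(1, X.shape[1]):
+         x, x_prev = X[:, t], X[:, t - 1]
+         a = W @ x                                  # project inputs
+         y = np.linalg.solve(M, a)                  # equilibrium of the fast dynamics (16)
+         xb, yb = x + x_prev, y + y_prev            # delayed sums
+         W += 2 * eta * (np.outer(yb, xb) - np.outer(a, x))   # feedforward update
+         M += (eta / tau) * (np.outer(yb, yb) - M)            # lateral update
+         y_prev = y
+         Y.append(y)
+     return W, M, np.array(Y).T
+ ```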
+
236
+ # 5 Biologically plausible neural network implementation
237
+
238
+ We now demonstrate that Bio-SFA can be implemented in a biologically plausible network, depicted in Fig. 1. Recall that we define a network to be biologically plausible if it computes its output in the online setting and has local learning rules. The neural network consists of an input layer of $m$ neurons (blue circles) and an output layer of $k$ neurons with separate dendritic and somatic compartments (black circles with 2 compartments). At each time $t$ , the $m$ -dimensional expanded signal $\mathbf{x}_t$ , which is represented by the activity of the input neurons, is multiplied by the weight matrix $\mathbf{W}$ , which is encoded by the feedforward synapses connecting the input neurons to the output neurons (green lines). This yields the $k$ -dimensional projection $\mathbf{a}_t = \mathbf{W}\mathbf{x}_t$ , which is represented in the dendritic compartment of the output neurons and then propagated to the somatic compartments. This is followed by the fast recurrent neural dynamics Eq. (16) amongst the somatic compartments of the output neurons, where the matrix $\mathbf{M}$ is encoded by the lateral synapses connecting the layer of output neurons (red lines). These fast neural dynamics equilibrate at $\mathbf{y}_t = \mathbf{M}^{-1}\mathbf{a}_t$ . The $k$ -dimensional output signal $\mathbf{y}_t$ is represented by the activity of the output neurons.
239
+
240
+ The synaptic updates are as follows. Recall that $\bar{\mathbf{x}}_t = \mathbf{x}_t + \mathbf{x}_{t - 1}$ (resp. $\bar{\mathbf{y}}_t = \mathbf{y}_t + \mathbf{y}_{t - 1}$ ) is the delayed sum of the inputs (resp. outputs), which we assume are represented in the $m$ input neurons (resp. $k$ output neurons). Biologically, they can be represented by slowly changing concentrations (e.g., of calcium) at the pre- and post-synaptic terminals. We can write the elementwise synaptic updates in Alg. 1 as
241
+
242
+ $$
243
+ W_{ij} \leftarrow W_{ij} + 2\eta \left( \bar{y}_t^i \bar{x}_t^j - a_t^i x_t^j \right), \quad 1 \leq i \leq k, \ 1 \leq j \leq m, \tag{17}
244
+ $$
245
+
246
+ $$
247
+ M _ {i j} \leftarrow M _ {i j} + \frac {\eta}{\tau} \left(\bar {y} _ {t} ^ {i} \bar {y} _ {t} ^ {j} - M _ {i j}\right), \quad 1 \leq i, j \leq k. \tag {18}
248
+ $$
249
+
250
+ Since the $j^{\mathrm{th}}$ input neuron stores the variables $x_{t}^{j}$ , $\bar{x}_t^j$ and the $i^{\mathrm{th}}$ output neuron stores the variables $a_{t}^{i},y_{t}^{i},\bar{y}_{t}^{i}$ , the update for each synapse is local.
251
+
252
+ It is worth comparing the derived updates to the feedforward weights Eq. (17) to the updates proposed by Földiák [8], which are given by
253
+
254
+ $$
255
+ w_{ij} \leftarrow w_{ij} + \eta \left( \bar{y}_t^i x_t^j - \bar{y}_t^i w_{ij} \right), \quad 1 \leq i \leq k, \ 1 \leq j \leq m.
256
+ $$
257
+
258
+ The first terms in the updates, $\bar{y}_t^i\bar{x}_t^j$ and $\bar{y}_t^i x_t^j$ , are quite similar. The main difference lies in the second terms: in our network, the term $a_t^i x_t^j$ serves to whiten the inputs, whereas Földiák's term $\bar{y}_t^i w_{ij}$ is added as a decay to ensure the weights remain bounded. In addition, our network includes lateral weights $M_{ij}$ , which ensure that the projections $y_t^i$ are distinct; no such lateral weights appear in Földiák's network. While the updates are similar in some respects, it is difficult to compare the outputs of the networks because Földiák's network is postulated rather than derived from a principled objective function, so it must be simulated numerically in order to evaluate its output.
259
+
260
+ # 6 Experiments
261
+
262
+ To validate our approach, we test Bio-SFA (Alg. 1) on synthetic and naturalistic datasets. We provide an overview of the experiments here and defer detailed descriptions and additional figures to Sec. B of the supplement. The evaluation code is available at https://github.com/flatironinstitute/bio-sfa.
263
+
264
+ To measure the performance of our algorithm, we compare the "slowness" of the projection $\mathbf{Y} = \mathbf{M}^{-1}\mathbf{W}\mathbf{X}$ with that of the slowest possible projection. Slowness can be quantified using the objective (6). We first evaluate the objective (6) at its optimum:
265
+
266
+ $$
267
+ \lambda_{\text{slow}} := \min \left\{ \operatorname{Tr} \mathbf{V}^\top \mathbf{C}_{\dot{x}\dot{x}} \mathbf{V} : \mathbf{V} \in \mathbb{R}^{m \times k} \ \text{s.t.} \ \mathbf{V}^\top \mathbf{C}_{xx} \mathbf{V} = \mathbf{I}_k \right\},
268
+ $$
269
+
270
+ which can be evaluated using an offline generalized eigenvalue problem solver. To compute the error at each iteration, we compare the slowness of the current projection to the minimal slowness:
271
+
272
+ $$
273
+ \operatorname{Error} = \operatorname{Tr} \tilde{\mathbf{V}}^\top \mathbf{C}_{\dot{x}\dot{x}} \tilde{\mathbf{V}} - \lambda_{\text{slow}}, \quad \tilde{\mathbf{V}} := \mathbf{W}^\top \mathbf{M}^{-1} \left( \mathbf{M}^{-1} \mathbf{W} \mathbf{C}_{xx} \mathbf{W}^\top \mathbf{M}^{-1} \right)^{-1/2}, \tag{19}
274
+ $$
275
+
276
+ where the normalization ensures that $\tilde{\mathbf{V}}$ satisfies the constraint in Eq. (6). In Sec. B, we show that the unnormalized projection $\mathbf{W}^\top \mathbf{M}^{-1}$ indeed asymptotically satisfies the constraint in Eq. (6).
277
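+ In code, $\lambda_{\text{slow}}$ and the error of Eq. (19) can be computed with an off-the-shelf generalized eigensolver. A sketch (ours), assuming SciPy and estimates of $\mathbf{C}_{xx}$ and $\mathbf{C}_{\dot{x}\dot{x}}$ :
+
+ ```python
+ import numpy as np
+ from scipy.linalg import eigh, sqrtm
+
+ def slowness_error(W, M, Cxx, Cdd, k):
+     """Eq. (19): excess slowness of the learned projection relative to the
+     optimum of Eq. (6). Cdd is the covariance of the derivative process."""
+     lam = eigh(Cdd, Cxx, eigvals_only=True)       # generalized eigenvalues, ascending
+     lam_slow = lam[:k].sum()                      # optimal value of Eq. (6)
+     P = np.linalg.solve(M, W)                     # M^{-1} W
+     N = sqrtm(np.linalg.inv(P @ Cxx @ P.T)).real  # normalization in Eq. (19)
+     Vt = P.T @ N                                  # Ṽ = Wᵀ M^{-1} (M^{-1} W C_xx Wᵀ M^{-1})^{-1/2}
+     return np.trace(Vt.T @ Cdd @ Vt) - lam_slow
+ ```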
+
278
+ # 6.1 Chaotic time series
279
+
280
+ Before testing on naturalistic datasets, we test Bio-SFA on a challenging synthetic dataset. Let $\{\gamma_t\}$ be a (slow) driving force equal to the sum of 6 sine functions with random amplitudes, frequencies and phases, Fig. 2a (red line). We define a noisy series via the logistic map with time-varying growth rate: $z_{t} = (3.6 + 0.4\gamma_{t})z_{t - 1}(1 - z_{t - 1})$ , Fig. 2b (black dots). Wiskott [30] showed that the driving force $\{\gamma_t\}$ can be recovered from the noisy series $\{z_t\}$ by applying (offline) Quadratic SFA to the 4-dimensional signal $\{\mathbf{s}_t\}$ whose components are the values of the noisy series over the 4 most recent time steps, i.e., $\mathbf{s}_t \coloneqq (z_t, z_{t - 1}, z_{t - 2}, z_{t - 3})$ . We replicate the results from [30] using Bio-SFA. Let $\{\mathbf{x}_t\}$ be the 14-dimensional quadratic expansion of $\{\mathbf{s}_t\}$ . We use Bio-SFA to extract the slowest one-dimensional projection $\{y_t\}$ , Fig. 2c (green dots). Qualitatively, the slowest projection recovered by Bio-SFA closely aligns with the slow driving force $\{\gamma_t\}$ . In Fig. 2d we plot the error at each iteration.
281
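+ The dataset is easy to regenerate. In the sketch below, the amplitude, frequency, and phase ranges of the 6 sinusoids are our guesses rather than the paper's exact values, and the run is much shorter than the $5\times 10^{7}$ steps used above:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ T = 50_000
+ t = np.arange(T)
+
+ # slow driving force: sum of 6 sinusoids with random parameters, scaled to [-1, 1]
+ amp = rng.uniform(0.1, 1.0, size=(6, 1))
+ freq = rng.uniform(1e-4, 1e-3, size=(6, 1))
+ phase = rng.uniform(0, 2 * np.pi, size=(6, 1))
+ gamma = (amp * np.sin(2 * np.pi * freq * t + phase)).sum(axis=0)
+ gamma /= np.abs(gamma).max()    # growth rate 3.6 + 0.4*gamma stays in [3.2, 4.0]
+
+ # logistic map with time-varying growth rate
+ z = np.empty(T)
+ z[0] = 0.6
+ for i in range(1, T):
+     z[i] = (3.6 + 0.4 * gamma[i]) * z[i - 1] * (1 - z[i - 1])
+
+ # 4-dimensional time-delay embedding s_t = (z_t, z_{t-1}, z_{t-2}, z_{t-3})
+ S = np.stack([z[3:], z[2:-1], z[1:-2], z[:-3]])
+ ```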
+
282
+ ![](images/8b780c3645435b197debf383643e9e549b72fee9b12381ed1b70406286918a62.jpg)
283
+ (a) Driving force
284
+
285
+ ![](images/93142a664af83feb28078522e7e8b2c510b968ce50513d5a7850062a37b15986.jpg)
286
+ (b) Noisy series
287
+
288
+ ![](images/76071c7ca750e6b5b0b45a37117b0605fac92378613b72ee69fa020fc7ec5e2e.jpg)
289
+ (c) Bio-SFA output
290
+ ![](images/a247cadadc4b433eec92f957e4eba6c0670fe39d51b2f97ebf562fe34518176e.jpg)
+ (d) Error
+
+ Figure 2: Performance of Bio-SFA on a noisy series generated by a logistic map with a slow driving force. Panels (a), (b) and (c) depict the final 5000 time steps (out of $5 \times 10^{7}$ time steps) of the normalized driving force $\{\gamma_t\}$ (red line), noisy series $\{z_t\}$ (black dots), and Bio-SFA output $\{y_t\}$ (green dots). Panel (d) shows the mean error and $90\%$ confidence intervals over 10 runs.
294
+
295
+ # 6.2 Sequence of natural images
296
+
297
+ Next, we test Bio-SFA on a sequence of natural images. First, a 256-dimensional sequence $\{\mathbf{z}_t\}$ was generated by moving a $16\times 16$ patch over 13 natural images from [12] via translations, zooms, and rotations, Fig. 3a. To extract relevant features, we follow the same procedure as Berkes and Wiskott [1], but replace the offline SFA solver with Bio-SFA to generate a 49-dimensional output signal $\{\mathbf{y}_t\}$ . To visualize the 49-dimensional output, we calculate the unit vector $\mathbf{z} \in \mathbb{R}^{256}$ that maximizes $y^i$ , for $i = 1,\dots,49$ . These optimal stimuli, $\mathbf{z}$ , which are displayed as $16\times 16$ patches in Fig. 3b, resemble Gabor patches and are in qualitative agreement with physiological characteristics of complex cells in the visual cortex. This aligns with the results in [1]; see also [2]. To evaluate the performance of Bio-SFA, we plot the error at each iteration in Fig. 3c.
298
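+ Finding a maximally excitatory unit-norm stimulus is a small constrained optimization. For a unit whose response is linear-plus-quadratic in $\mathbf{z}$ (cf. Eq. (4)), i.e., $y(\mathbf{z}) = \mathbf{w}^\top\mathbf{z} + \mathbf{z}^\top\mathbf{H}\mathbf{z}$ , projected gradient ascent on the unit sphere suffices; the helper below is a hypothetical sketch, not part of the released code:
+
+ ```python
+ import numpy as np
+
+ def optimal_stimulus(w, H, n_steps=2000, lr=0.1, seed=0):
+     """Unit-norm z maximizing y(z) = w·z + z·Hz by projected gradient ascent.
+     w: (n,) linear weights; H: (n, n) symmetric quadratic weights."""
+     rng = np.random.default_rng(seed)
+     z = rng.standard_normal(w.shape[0])
+     z /= np.linalg.norm(z)
+     for _ in range(n_steps):
+         z = z + lr * (w + 2 * H @ z)   # gradient of y(z)
+         z /= np.linalg.norm(z)         # project back onto the unit sphere
+     return z
+ ```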
+
299
+ ![](images/47927f147276278f07d2cd6f73a249f4c723ec7646ae80406e0f8837bbd231c5.jpg)
300
+ (a) Image sequence generation
301
+
302
+ ![](images/fee322a3d331c2932a49625a27352f77cdc1c5059f5d24ac2ebf27ba8b0cc2fb.jpg)
303
+ (b) Optimal stimuli
304
+
305
+ ![](images/cec05aa9315220c74e86fbbcef1a54ac0cd0ffa44ff02dea33ededaff0385cae.jpg)
306
+ (c) Error
307
+ Figure 3: Performance of Bio-SFA on a sequence of natural images. Panel (a) illustrates the generation of the sequence. Panel (b) shows the maximally excitatory stimuli for the 49-dimensional output obtained by Bio-SFA. Panel (c) depicts the mean error and $90\%$ confidence intervals over 10 runs.
308
+
309
+ # 6.3 Hierarchical SFA on the visual stream of a simulated rat
310
+
311
+ Following Schonfeld and Wiskott [25], we test a hierarchical 3-layer organization of Bio-SFA "modules" on inputs from the RatLab framework [24], which simulates the field of view of a rat moving along random trajectories in a rectangular room. Each layer consists of spatially distributed modules that receive overlapping patches of either the visual stream or the preceding layer. Inside each module, there are 3 steps: (1) Bio-SFA first reduces the dimension of the inputs to generate a 32-dimensional signal, (2) the reduced signal is quadratically expanded, and (3) Bio-SFA reduces the expanded signal to the slowest 32 features (a sketch of one module appears at the end of this subsection). The layers are organized so that the modules in each successive layer receive inputs from larger patches of the visual field, Fig. 4a. Adopting the procedure in [25], the
312
+
313
+ ![](images/050e626753223776e2a5c9a05a0728409fd49ee5b0a4803c5cc39b62b2beccb9.jpg)
314
+ (a) Layered architecture
315
+
316
+ ![](images/f38b705cd521b8906b7579afe606b688965c39d12d2f14c00feb0d738892ba62.jpg)
317
+ (b) SFA firing maps
318
+
319
+ ![](images/87c4fc0e5b27224a573609151aff0f4d13de221d6533a26cdc0e88e6cab380dc.jpg)
320
+ (c) ICA firing maps
321
+
322
+ ![](images/43cc904efa74b3b19c7c6a44e9027acc918b6f490385a2f9b983279cf33a8cba.jpg)
323
+ (d) Slowness of SFA output
324
+ Figure 4: Performance of hierarchical Bio-SFA on a visual stream of a simulated rat. Panel (a) displays a schematic of the layered architecture and the operations within each module. Panels (b) and (c) depict the firing maps of the units in the final SFA layer and the subsequent ICA layer. Each rectangle shows the response of a component of the output as a function of the simulated rat's position within the rectangular room. Panel (d) shows the slowness of each layer's output at each iteration for a single trial.
325
+
326
+ network is trained greedily layer-by-layer with weight sharing across modules in each layer (see Sec. B of the supplement). The final layer consists of a single module, with a 32-dimensional output, whose spatially-dependent firing maps are shown in Fig. 4b. The 3 SFA layers are followed by a fourth layer, which performs sparse coding via Independent Component Analysis (ICA) [11] (in the offline setting) with a 32-dimensional output, whose firing maps are shown in Fig. 4c. As in [25], the firing maps of the final ICA layer are spatially localized and resemble the firing maps of place cells in the hippocampus. To quantify the performance of this hierarchical network, we plot the slowness (rather than the error; see Sec. B of the supplement) of each of the first 3 layers' outputs at each iteration, Fig. 4d.
327
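+ Schematically, a single module chains the earlier sketches: a first Bio-SFA stage reduces the patch stream to 32 dimensions, the reduced signal is quadratically expanded, and a second Bio-SFA stage keeps the 32 slowest features. The helper names `bio_sfa` and `quadratic_expansion` refer to our sketches above; patch extraction and layer wiring are omitted:
+
+ ```python
+ import numpy as np
+
+ def sfa_module(P, k=32):
+     """One hierarchical module applied to a patch stream P of shape (p, T+1)."""
+     _, _, Y1 = bio_sfa(P, k)                                      # step 1: reduce to k dims
+     H = np.stack([quadratic_expansion(y) for y in Y1.T], axis=1)  # step 2: expand
+     _, _, Y2 = bio_sfa(H, k)                                      # step 3: k slowest features
+     return Y2
+ ```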
+
328
+ # 7 Discussion
329
+
330
+ We derived an online algorithm for SFA with a biologically plausible neural network implementation, which is an important step towards understanding how the brain could use temporal slowness as a computational principle. While our network implementation satisfies natural requirements for biological plausibility, it differs from biological neural circuits in a number of ways. First, our network includes direct lateral inhibitory synapses between excitatory neurons, whereas inhibition is typically mediated by interneurons in biological networks. By adapting the approach in [21], interneurons can be introduced to mediate inhibition. Second, the synaptic updates in our network require both the pre- and post-synaptic neurons to store slow variables; however, signal frequencies in dendrites are slower than in axons, suggesting that it is more likely for slow variables to be stored in the post-synaptic neuron, not the pre-synaptic neuron. We can address this with a modification, which is exact when the expanded signal $\{\mathbf{x}_t\}$ exhibits time-reversibility, so that only the post-synaptic neuron represents slow variables; see Sec. C of the supplement. Finally, our network consists of linear neurons, which do not respect the nonnegativity constraints of neuronal outputs. An interesting future direction is to understand the effect of enforcing a nonnegativity constraint on $\mathbf{y}_t$ in the objective function (9).
331
+
332
+ # Broader impact
333
+
334
+ An important problem in neuroscience is to understand the computational principles the brain uses to process information. Progress on this front has the potential to yield wide-ranging benefits for managing the adverse effects of neurological diseases and disorders. This work represents a small step in that direction.
335
+
336
+ # Acknowledgements
337
+
338
+ This work was internally supported by the Simons Foundation. We thank Yanis Bahroun, Nicholas Chua, Shiva Farashahi, Johannes Friedrich, Alexander Genkin, Jason Moore, Anirvan Sengupta and Tiberiu Tesileanu for helpful comments and feedback on an earlier draft of this work.
339
+
340
+ # References
341
+
342
+ [1] Pietro Berkes and Laurenz Wiskott. Applying slow feature analysis to image sequences yields a rich repertoire of complex cell properties. In International Conference on Artificial Neural Networks, pages 81-86. Springer, 2002.
343
+ [2] Pietro Berkes and Laurenz Wiskott. Slow feature analysis yields a rich repertoire of complex cell properties. Journal of Vision, 5(6):9-9, 2005.
344
+ [3] T. Blaschke, P. Berkes, and L. Wiskott. What is the relationship between slow feature analysis and independent component analysis? Neural Computation, 18(10):2495-2508, 2006.
345
+ [4] T. Blaschke and L. Wiskott. CuBICA: Independent component analysis by simultaneous third- and fourth-order cumulant diagonalization. IEEE Transactions on Signal Processing, 52(5):1250-1256, 2004.
346
+ [5] David Clark, Jesse Livezey, and Kristofer Bouchard. Unsupervised discovery of temporal structure in noisy data with dynamical components analysis. In Advances in Neural Information Processing Systems, pages 14267-14278, 2019.
347
+ [6] Trevor F Cox and Michael AA Cox. Multidimensional Scaling. Chapman and Hall/CRC, 2000.
348
+
349
+ [7] Felix Creutzig and Henning Sprekeler. Predictive coding and the slowness principle: An information-theoretic approach. Neural Computation, 20(4):1026-1041, 2008.
350
+ [8] Peter Földiák. Learning invariance from transformation sequences. Neural Computation, 3(2):194-200, June 1991.
351
+ [9] Mathias Franzius, Henning Sprekeler, and Laurenz Wiskott. Slowness and sparseness lead to place, head-direction, and spatial-view cells. PLoS Computational Biology, 3(8):e166, 2007.
352
+ [10] A. Hyvarinen and E. Oja. Independent component analysis: Algorithms and applications. Neural Networks, 13(4-5):411-430, June 2000.
353
+ [11] Aapo Hyvarinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626-634, 1999.
354
+ [12] Aapo Hyvärinen and Erkki Oja. Independent component analysis: Algorithms and applications. Neural Networks, 13(4-5):411-430, 2000.
355
+ [13] Christof Koch and Tomaso Poggio. Multiplying with synapses and neurons. In Single Neuron Computation, pages 315-345. Elsevier, 1992.
356
+ [14] Varun Raj Kompella, Matthew Luciw, and Jürgen Schmidhuber. Incremental slow feature analysis: Adaptive low-complexity slow feature updating from high-dimensional input streams. Neural Computation, 24(11):2994-3024, 2012.
357
+ [15] David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, and Dmitri B. Chklovskii. A biologically plausible neural network for multi-channel canonical correlation analysis. arXiv preprint arXiv:2010.00525, 2020.
358
+ [16] Stephan Liwicki, Stefanos Zafeiriou, and Maja Pantic. Incremental slow feature analysis with indefinite kernel for online temporal video segmentation. In Computer Vision – ACCV 2012, volume 7725, pages 162–176. Springer Berlin Heidelberg, 2013.
359
+ [17] Zeeshan Khawar Malik, Amir Hussain, and Jonathan Wu. Novel biologically inspired approaches to extracting online information from temporal data. Cognitive Computation, 6(3):595-607, 2014.
360
+ [18] Bartlett W Mel and Christof Koch. Sigma-pi learning: On radial basis functions and cortical associative learning. In Advances in Neural Information Processing Systems, pages 474-481, 1990.
361
+ [19] Graeme Mitchison. Removing time variation with the anti-Hebbian differential synapse. Neural Computation, 3(3):312-320, 1991.
362
+ [20] Frank Noé and Cecilia Clementi. Kinetic distance and kinetic maps from molecular dynamics simulation. Journal of Chemical Theory and Computation, 11(10):5002-5011, September 2015.
363
+ [21] Cengiz Pehlevan and Dmitri Chklovskii. A normative theory of adaptive dimensionality reduction in neural networks. In Advances in Neural Information Processing Systems, pages 2269-2277, 2015.
364
+ [22] Guillermo Pérez-Hernández, Fabian Paul, Toni Giorgino, Gianni De Fabritiis, and Frank Noé. Identification of slow molecular order parameters for Markov model construction. The Journal of Chemical Physics, 139(1):015102, 2013.
365
+ [23] David E Rumelhart, Geoffrey E Hinton, James L McClelland, et al. A general framework for parallel distributed processing. Parallel distributed processing: Explorations in the microstructure of cognition, 1(45-76):26, 1986.
366
+ [24] Fabian Schonfeld and Laurenz Wiskott. RatLab: an easy to use tool for place code simulations. Frontiers in Computational Neuroscience, 7, 2013.
367
+ [25] Fabian Schonfeld and Laurenz Wiskott. Modeling place field activity with hierarchical slow feature analysis. Frontiers in Computational Neuroscience, 9, 2015.
368
+ [26] Christian R. Schwantes and Vijay S. Pande. Modeling molecular kinetics with tICA and the kernel trick. Journal of Chemical Theory and Computation, 11(2):600-608, 2015.
369
+ [27] Mohammad M. Sultan and Vijay S. Pande. tICA-metadynamics: Accelerating metadynamics by using kinetically selected collective variables. Journal of Chemical Theory and Computation, 13(6):2440-2447, 2017.
370
+
371
+ [28] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society of London. Series B: Biological Sciences, 265(1394):359-366, 1998.
372
+ [29] Björn Weghenkel and Laurenz Wiskott. Slowness as a proxy for temporal predictability: An empirical comparison. Neural Computation, 30(5):1151-1179, 2018.
373
+ [30] Laurenz Wiskott. Estimating driving forces of nonstationary time series with slow feature analysis. arXiv preprint cond-mat/0312317, 2003.
374
+ [31] Laurenz Wiskott and Terrence J Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715-770, 2002.
375
+ [32] Bardia Yousefi and Chu Kiong Loo. Development of fast incremental slow feature analysis (f-IncSFA). In The 2012 International Joint Conference on Neural Networks (IJCNN). IEEE, June 2012.
376
+ [33] Qingfu Zhang and Yiu Wing Leung. A class of learning algorithms for principal component analysis and minor component analysis. IEEE Transactions on Neural Networks, 11(1):200-204, 2000.
377
+ [34] Zhang Zhang and Dacheng Tao. Slow feature analysis for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3):436-450, 2012.
378
+ [35] Tiziano Zito. Modular toolkit for data processing (MDP): a Python data processing framework. Frontiers in Neuroinformatics, 2(8), 2008.
abiologicallyplausibleneuralnetworkforslowfeatureanalysis/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a44b85209392c66775ef9ec85e786a786b384230249bc6813bd8469036b5587
3
+ size 410238
abiologicallyplausibleneuralnetworkforslowfeatureanalysis/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ff5d53aaeaf19e6881c971867ae903465c1ce74ca2b312bb617ce0d042d56b6f
3
+ size 450651
abooleantaskalgebraforreinforcementlearning/2c300620-8875-40a9-bdce-6433206b3d30_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:20fe7dcd4e544d233101f24f9892e68c674772dab8344962e5435e42622b5bed
3
+ size 73124
abooleantaskalgebraforreinforcementlearning/2c300620-8875-40a9-bdce-6433206b3d30_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c27245231b88b4f3e5e7f2e628ec907c95ba3babda68bf1e3c03fef65bb11714
3
+ size 89341