Add Batch 5f774b13-06f2-4ac4-8df7-3b4128a37995
This view is limited to 50 files because it contains too many changes. See raw diff
- imapimplicitmappingandpositioninginrealtime/2bba5ba1-bbad-4c72-a9e0-81c1ddc11d8a_content_list.json +3 -0
- imapimplicitmappingandpositioninginrealtime/2bba5ba1-bbad-4c72-a9e0-81c1ddc11d8a_model.json +3 -0
- imapimplicitmappingandpositioninginrealtime/2bba5ba1-bbad-4c72-a9e0-81c1ddc11d8a_origin.pdf +3 -0
- imapimplicitmappingandpositioninginrealtime/full.md +337 -0
- imapimplicitmappingandpositioninginrealtime/images.zip +3 -0
- imapimplicitmappingandpositioninginrealtime/layout.json +3 -0
- imghumimplicitgenerativemodelsof3dhumanshapeandarticulatedpose/36a7c835-a69c-46aa-a068-2da3104f3ca8_content_list.json +3 -0
- imghumimplicitgenerativemodelsof3dhumanshapeandarticulatedpose/36a7c835-a69c-46aa-a068-2da3104f3ca8_model.json +3 -0
- imghumimplicitgenerativemodelsof3dhumanshapeandarticulatedpose/36a7c835-a69c-46aa-a068-2da3104f3ca8_origin.pdf +3 -0
- imghumimplicitgenerativemodelsof3dhumanshapeandarticulatedpose/full.md +280 -0
- imghumimplicitgenerativemodelsof3dhumanshapeandarticulatedpose/images.zip +3 -0
- imghumimplicitgenerativemodelsof3dhumanshapeandarticulatedpose/layout.json +3 -0
- inasintegralnasfordeviceawaresalientobjectdetection/b7e80f1c-cf18-43db-a58d-e3d942282fd8_content_list.json +3 -0
- inasintegralnasfordeviceawaresalientobjectdetection/b7e80f1c-cf18-43db-a58d-e3d942282fd8_model.json +3 -0
- inasintegralnasfordeviceawaresalientobjectdetection/b7e80f1c-cf18-43db-a58d-e3d942282fd8_origin.pdf +3 -0
- inasintegralnasfordeviceawaresalientobjectdetection/full.md +336 -0
- inasintegralnasfordeviceawaresalientobjectdetection/images.zip +3 -0
- inasintegralnasfordeviceawaresalientobjectdetection/layout.json +3 -0
- ipokepokingastillimageforcontrolledstochasticvideosynthesis/96db01a0-9287-4a41-a8c2-26903cab6f59_content_list.json +3 -0
- ipokepokingastillimageforcontrolledstochasticvideosynthesis/96db01a0-9287-4a41-a8c2-26903cab6f59_model.json +3 -0
- ipokepokingastillimageforcontrolledstochasticvideosynthesis/96db01a0-9287-4a41-a8c2-26903cab6f59_origin.pdf +3 -0
- ipokepokingastillimageforcontrolledstochasticvideosynthesis/full.md +332 -0
- ipokepokingastillimageforcontrolledstochasticvideosynthesis/images.zip +3 -0
- ipokepokingastillimageforcontrolledstochasticvideosynthesis/layout.json +3 -0
- mdalumultisourcedomainadaptationandlabelunificationwithpartialdatasets/ec3dc6da-bcdf-4075-945a-6df245e914e8_content_list.json +3 -0
- mdalumultisourcedomainadaptationandlabelunificationwithpartialdatasets/ec3dc6da-bcdf-4075-945a-6df245e914e8_model.json +3 -0
- mdalumultisourcedomainadaptationandlabelunificationwithpartialdatasets/ec3dc6da-bcdf-4075-945a-6df245e914e8_origin.pdf +3 -0
- mdalumultisourcedomainadaptationandlabelunificationwithpartialdatasets/full.md +353 -0
- mdalumultisourcedomainadaptationandlabelunificationwithpartialdatasets/images.zip +3 -0
- mdalumultisourcedomainadaptationandlabelunificationwithpartialdatasets/layout.json +3 -0
- vonmisesfisherlossanexplorationofembeddinggeometriesforsupervisedlearning/2293c123-d314-493b-86a6-7904e80ced43_content_list.json +3 -0
- vonmisesfisherlossanexplorationofembeddinggeometriesforsupervisedlearning/2293c123-d314-493b-86a6-7904e80ced43_model.json +3 -0
- vonmisesfisherlossanexplorationofembeddinggeometriesforsupervisedlearning/2293c123-d314-493b-86a6-7904e80ced43_origin.pdf +3 -0
- vonmisesfisherlossanexplorationofembeddinggeometriesforsupervisedlearning/full.md +334 -0
- vonmisesfisherlossanexplorationofembeddinggeometriesforsupervisedlearning/images.zip +3 -0
- vonmisesfisherlossanexplorationofembeddinggeometriesforsupervisedlearning/layout.json +3 -0
- whenpigsflycontextualreasoninginsyntheticandnaturalscenes/526d1cec-6da5-4743-96b9-a19fc9e7a96b_content_list.json +3 -0
- whenpigsflycontextualreasoninginsyntheticandnaturalscenes/526d1cec-6da5-4743-96b9-a19fc9e7a96b_model.json +3 -0
- whenpigsflycontextualreasoninginsyntheticandnaturalscenes/526d1cec-6da5-4743-96b9-a19fc9e7a96b_origin.pdf +3 -0
- whenpigsflycontextualreasoninginsyntheticandnaturalscenes/full.md +296 -0
- whenpigsflycontextualreasoninginsyntheticandnaturalscenes/images.zip +3 -0
- whenpigsflycontextualreasoninginsyntheticandnaturalscenes/layout.json +3 -0
- where2actfrompixelstoactionsforarticulated3dobjects/71ab66ad-201d-4d83-adce-eab14dcf3f2b_content_list.json +3 -0
- where2actfrompixelstoactionsforarticulated3dobjects/71ab66ad-201d-4d83-adce-eab14dcf3f2b_model.json +3 -0
- where2actfrompixelstoactionsforarticulated3dobjects/71ab66ad-201d-4d83-adce-eab14dcf3f2b_origin.pdf +3 -0
- where2actfrompixelstoactionsforarticulated3dobjects/full.md +329 -0
- where2actfrompixelstoactionsforarticulated3dobjects/images.zip +3 -0
- where2actfrompixelstoactionsforarticulated3dobjects/layout.json +3 -0
- whereareyouheadingdynamictrajectorypredictionwithexpertgoalexamples/22c0ccfa-3d70-4f8c-acb0-c3b91ebcaf58_content_list.json +3 -0
- whereareyouheadingdynamictrajectorypredictionwithexpertgoalexamples/22c0ccfa-3d70-4f8c-acb0-c3b91ebcaf58_model.json +3 -0
imapimplicitmappingandpositioninginrealtime/2bba5ba1-bbad-4c72-a9e0-81c1ddc11d8a_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca83a7917c596f8b3046b882c2015e8c387fa6ca9c4b83d32a32a526a51ed253
+size 70780
imapimplicitmappingandpositioninginrealtime/2bba5ba1-bbad-4c72-a9e0-81c1ddc11d8a_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b62632aa1039e77183fb2b2d2cc037cfd75a2f29fdab324130cf8f2384eb89f
+size 86131
imapimplicitmappingandpositioninginrealtime/2bba5ba1-bbad-4c72-a9e0-81c1ddc11d8a_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6cc3d61093ec067a157ed5a0b59e610ab3ecdc63836e3d2fd46bd4957557c085
+size 2855293
imapimplicitmappingandpositioninginrealtime/full.md
ADDED
|
@@ -0,0 +1,337 @@
| 1 |
+
# iMAP: Implicit Mapping and Positioning in Real-Time
|
| 2 |
+
|
| 3 |
+
Edgar Sucar<sup>1</sup> Shikun Liu<sup>1</sup> Joseph Ortiz<sup>2</sup> Andrew J. Davison<sup>1</sup>
|
| 4 |
+
<sup>1</sup>Dyson Robotics Lab, Imperial College London
|
| 5 |
+
<sup>2</sup>Robot Vision Lab, Imperial College London
|
| 6 |
+
|
| 7 |
+
{e.sucar18, shikun.liu17, j.ortiz, a.davison}@imperial.ac.uk
|
| 8 |
+
|
| 9 |
+
# Abstract
|
| 10 |
+
|
| 11 |
+
We show for the first time that a multilayer perceptron (MLP) can serve as the only scene representation in a real-time SLAM system for a handheld RGB-D camera. Our network is trained in live operation without prior data, building a dense, scene-specific implicit 3D model of occupancy and colour which is also immediately used for tracking.
|
| 12 |
+
|
| 13 |
+
Achieving real-time SLAM via continual training of a neural network against a live image stream requires significant innovation. Our iMAP algorithm uses a keyframe structure and multi-processing computation flow, with dynamic information-guided pixel sampling for speed, with tracking at $10\mathrm{Hz}$ and global map updating at $2\mathrm{Hz}$ . The advantages of an implicit MLP over standard dense SLAM techniques include efficient geometry representation with automatic detail control and smooth, plausible filling-in of unobserved regions such as the back surfaces of objects.
|
| 14 |
+
|
| 15 |
+
# 1. Introduction
|
| 16 |
+
|
| 17 |
+
A real-time Simultaneous Localisation and Mapping (SLAM) system for an intelligent embodied device must incrementally build a representation of the 3D world, to enable both localisation and scene understanding. The ideal representation should precisely encode geometry, but also be efficient, with the memory capacity available used adaptively in response to scene size and complexity; predictive, able to plausibly estimate the shape of regions not directly observed; and flexible, not needing a large amount of training data or manual adjustment to run in a new scenario.
|
| 18 |
+
|
| 19 |
+
Implicit neural representations are a promising recent advance in off-line reconstruction, using a multilayer perceptron (MLP) to map a query 3D point to occupancy or colour, and optimising it from scratch to fit a specific scene. An MLP is a general implicit function approximator, able to represent variable detail with few parameters and without quantisation artifacts. Even without prior training, the inherent priors present in the network structure allow it to make watertight geometry estimates from partial data, and
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
Figure 1: Room reconstruction from real-time iMAP with an Azure Kinect RGB-D camera, showing watertight scene model, camera tracking and automatic keyframe set.
|
| 23 |
+
|
| 24 |
+
plausible completion of unobserved regions.
|
| 25 |
+
|
| 26 |
+
In this paper, we show for the first time that an MLP can be used as the only scene representation in a real-time SLAM system using a hand-held RGB-D camera. Our randomly-initialised network is trained in live operation and we do not require any prior training data. Our iMAP system is designed with a keyframe structure and multi-processing computation flow reminiscent of PTAM [11]. In a tracking process, running at over $10\mathrm{Hz}$ , we align live RGB-D observations with rendered depth and colour predictions from the MLP scene map. In parallel, a mapping process selects and maintains a set of historic keyframes whose viewpoints span the scene, and uses these to continually train and improve the MLP, while jointly optimising the keyframe poses.
|
| 27 |
+
|
| 28 |
+
In both tracking and mapping, we dynamically sample the most informative RGB-D pixels to reduce geometric uncertainty, achieving real-time speed. Our system runs in Python, and all optimisation is via a standard PyTorch framework [20] on a single desktop CPU/GPU system.
|
| 29 |
+
|
| 30 |
+
By casting SLAM as a continual learning problem, we achieve a representation which can represent scenes efficiently with continuous and adaptive resolution, and with a remarkable ability to smoothly interpolate to achieve complete, watertight reconstruction (Fig. 1). With around 10 - 20 keyframes, and an MLP with only 1 MB of parameters, we can accurately map whole rooms. Our scene representation has no fixed resolution; the distribution of keyframes automatically achieves efficient multi-scale mapping.
|
| 31 |
+
|
| 32 |
+
We demonstrate our system on a wide variety of real-world sequences and do exhaustive evaluation and ablative analysis on 8 scenes from the room-scale Replica Dataset [29]. We show that iMAP can make a more complete scene reconstruction than standard dense SLAM systems with significantly smaller memory footprint. We show competitive tracking performance on the TUM RGB-D dataset [30] against state-of-the-art SLAM systems.
|
| 33 |
+
|
| 34 |
+
To summarise, the key contributions of the paper are:
|
| 35 |
+
|
| 36 |
+
- The first dense real-time SLAM system that uses an implicit neural scene representation and is capable of jointly optimising a full 3D map and camera poses.
|
| 37 |
+
- The ability to incrementally train an implicit scene network in real-time, enabled by automated keyframe selection and loss guided sparse active sampling.
|
| 38 |
+
- A parallel implementation (fully in PyTorch [20] with multi-processing) of our presented SLAM formulation which works online with a hand-held RGB-D camera.
|
| 39 |
+
|
| 40 |
+
# 2. Related Work
|
| 41 |
+
|
| 42 |
+
Visual SLAM Systems Real-time visual SLAM systems for modelling environments are often built in a layered manner, where a sparse representation is used for localisation and more detailed geometry or semantics is layered on top. However, here we work in the 'dense SLAM' paradigm pioneered in [18, 17] where a unified dense scene representation is also the basis for camera tracking. Dense representations avoid arbitrary abstractions such as keypoints, enable tracking and relocalisation in robust invariant ways, and have long-term appeal as sensor-agnostic, unified, complete representations of spaces.
|
| 43 |
+
|
| 44 |
+
Some approaches in dense SLAM explicitly represent surfaces [8, 37], but direct representation of volume is desirable to enable a full range of applications such as planning. Standard representations for volume using occupancy or signed distance functions are very expensive in terms of memory if a fixed resolution is used [17]. Hierarchical approaches [6, 34, 24] are more efficient, but are complicated to implement and usually offer only a small range of level of detail. In either case, the representations are rather rigid, and not amenable to joint optimisation with camera poses, due to the huge number of parameters they use.
|
| 45 |
+
|
| 46 |
+
Machine learning can discover low-dimensional embeddings of dense structure which enable efficient, jointly optimisable representation. CodeSLAM [1] is one example, but using a depth-map view representation rather than full volumetric 3D. Learning techniques have also been used to improve dense reconstruction but require an existing scan [5] or previous training data [21, 36, 2].
|
| 47 |
+
|
| 48 |
+
Implicit Scene Representation with MLPs Scene representation and graphics have seen much recent progress on using implicit MLP neural models for object reconstruction [19, 14], object compression [33], novel view synthesis [15], and scene completion [27, 3]. Two recent papers [35, 39] have also explored camera pose optimisation. But so far these methods have been considered as an offline tool, with computational requirements on the order of hours, days or weeks. We show that when depth images are available, and when guided sparse sampling is used for rendering and training, these methods are suitable for real-time SLAM.
|
| 49 |
+
|
| 50 |
+
Continual Learning By using a single MLP as a master scene model, we pose real-time SLAM as online continual learning. An effective continual learning system should demonstrate both plasticity (the ability to acquire new knowledge) and stability (preserving old knowledge) [22, 7]. Catastrophic forgetting is a well-known property of neural networks, and is a failure of stability, where new experiences overwrite memories.
|
| 51 |
+
|
| 52 |
+
One line of work on alleviating catastrophic forgetting has focused on protecting representations against new data using relative weighting [10]. This is reminiscent of classic filtering approaches in SLAM such as the EKF [28] and is worth future investigation. Approaches which freeze [23] or consolidate [25] sub-networks after training on each individual task are perhaps too simple and discrete for SLAM.
|
| 53 |
+
|
| 54 |
+
Instead, we direct our attention towards the replay-based approach to continual learning, where previous knowledge is stored either directly in a buffer [13, 22], or compressed in a generative model [12, 26]. We use a straightforward method where keyframes are automatically selected to store and compress past memories. We use loss-guided random sampling of these keyframes in our continually running map update process to periodically replay and strengthen previously-observed scene regions, while continuing to add information via new keyframes. In SLAM terms, this approach is similar to that pioneered by PTAM [11], where a historic keyframe set and repeated global bundle adjustment serve as a long-term scene representation.
|
| 55 |
+
|
| 56 |
+
# 3. iMAP: A Real-Time Implicit SLAM System
|
| 57 |
+
|
| 58 |
+
# 3.1. System Overview
|
| 59 |
+
|
| 60 |
+
Figure 2 overviews how iMAP works. A 3D volumetric map is represented using a fully-connected neural network $F_{\theta}$ that maps a 3D coordinate to colour and volume density (Section 3.2).
|
| 61 |
+
|
| 62 |
+

|
| 63 |
+
Figure 2: iMAP system pipeline.
|
| 64 |
+
|
| 65 |
+
Given a camera pose, we can render the colour and depth of a pixel by accumulating network queries from samples in a back-projected ray (Section 3.3).
|
| 66 |
+
|
| 67 |
+
We map a scene from depth and colour video by incrementally optimising the network weights and camera poses with respect to a sparse set of actively sampled measurements (Section 3.6). Two processes run concurrently: tracking (Section 3.4), which optimises the pose from the current frame with respect to the locked network; and mapping (Section 3.4), which jointly optimises the network and the camera poses of selected keyframes, incrementally chosen based on information gain (Section 3.5).
|
| 68 |
+
|
| 69 |
+
# 3.2. Implicit Scene Neural Network
|
| 70 |
+
|
| 71 |
+
Following the network architecture in NeRF [15], we use an MLP with 4 hidden layers of feature size 256, and two output heads that map a 3D coordinate $\mathbf{p} = (x,y,z)$ to a colour and volume density value: $F_{\theta}(\mathbf{p}) = (\mathbf{c},\rho)$ . Unlike NeRF, we do not take into account viewing directions as we are not interested in modelling specularities.
|
| 72 |
+
|
| 73 |
+
We apply the Gaussian positional embedding proposed in Fourier Feature Networks [32] to lift the input 3D coordinate into $n$ -dimensional space: $\sin(\mathbf{Bp})$ , with $\mathbf{B}$ an $[n \times 3]$ matrix sampled from a normal distribution with standard deviation $\sigma$ . This embedding serves as input to the MLP and is also concatenated to the second activation layer of the network. Taking inspiration from SIREN [27], we allow optimisation of the embedding matrix $\mathbf{B}$ , implemented as a single fully-connected layer with sine activation.
|
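A minimal PyTorch sketch of this architecture, assuming the embedding is re-concatenated at the second hidden activation (layer sizes follow the text; the class and variable names are illustrative only):

```python
import torch
import torch.nn as nn

class SceneMLP(nn.Module):
    """Sketch of the implicit scene network: Gaussian positional embedding
    followed by a 4-hidden-layer MLP (width 256) with colour and density heads."""
    def __init__(self, n_embed=93, sigma=25.0, width=256):
        super().__init__()
        # Optimisable embedding matrix B, implemented as a linear layer with sine activation.
        self.embed = nn.Linear(3, n_embed, bias=False)
        nn.init.normal_(self.embed.weight, std=sigma)
        self.fc1 = nn.Linear(n_embed, width)
        self.fc2 = nn.Linear(width, width)
        # The embedding is concatenated again at the second activation layer (assumption on the exact point).
        self.fc3 = nn.Linear(width + n_embed, width)
        self.fc4 = nn.Linear(width, width)
        self.colour_head = nn.Linear(width, 3)
        self.density_head = nn.Linear(width, 1)
        self.act = nn.ReLU()

    def forward(self, p):                      # p: (..., 3) world coordinates
        g = torch.sin(self.embed(p))           # Gaussian positional embedding sin(Bp)
        h = self.act(self.fc1(g))
        h = self.act(self.fc2(h))
        h = self.act(self.fc3(torch.cat([h, g], dim=-1)))
        h = self.act(self.fc4(h))
        colour = torch.sigmoid(self.colour_head(h))
        density = self.density_head(h)         # raw volume density (output activation left unspecified here)
        return colour, density.squeeze(-1)
```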
| 74 |
+
|
| 75 |
+
# 3.3. Depth and Colour Rendering
|
| 76 |
+
|
| 77 |
+
Our new differentiable rendering engine, inspired by NeRF [15] and NodeSLAM [31], queries the scene network to obtain depth and colour images from a given view.
|
| 78 |
+
|
| 79 |
+
Given a camera pose $T_{WC}$ and a pixel coordinate $[u, v]$ , we first back-project a normalised viewing direction and transform it into world coordinates: $\mathbf{r} = T_{WC} K^{-1}[u, v]$ , with the camera intrinsics matrix $K$ . We take a set of $N$ samples along the ray $\mathbf{p}_i = d_i \mathbf{r}$ with corresponding depth values $\{d_1, \dots, d_N\}$ , and query the network for a colour and volume density $(\mathbf{c}_i, \rho_i) = F_\theta(\mathbf{p}_i)$ . We follow the stratified and hierarchical volume sampling strategies of NeRF.
|
| 82 |
+
|
| 83 |
+
Volume density is transformed into an occupancy probability by multiplying by the inter-sample distance $\delta_{i} = d_{i + 1} - d_{i}$ and passing this through activation function $o_i = 1 - \exp (-\rho_i\delta_i)$ . The ray termination probability at each sample can then be calculated as $w_{i} = o_{i}\prod_{j = 1}^{i - 1}(1 - o_{j})$ . Finally, depth and colour are rendered as the expectations:
|
| 84 |
+
|
| 85 |
+
$$
|
| 86 |
+
\hat{D}[u,v] = \sum_{i=1}^{N} w_i d_i, \quad \hat{I}[u,v] = \sum_{i=1}^{N} w_i \mathbf{c}_i. \tag{1}
|
| 87 |
+
$$
|
| 88 |
+
|
| 89 |
+
We can calculate the depth variance along the ray as:
|
| 90 |
+
|
| 91 |
+
$$
|
| 92 |
+
\hat{D}_{\mathrm{var}}[u,v] = \sum_{i=1}^{N} w_i \left(\hat{D}[u,v] - d_i\right)^2. \tag{2}
|
| 93 |
+
$$
|
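For illustration, the accumulation in Eqs. (1) and (2) can be sketched per ray as follows (assuming sample depths and per-sample network outputs are already available; hierarchical sampling is omitted):

```python
import torch

def render_ray(colour, density, d):
    """colour: (N, 3), density: (N,), d: (N,) sample depths along one ray."""
    delta = d[1:] - d[:-1]                                  # inter-sample distances
    delta = torch.cat([delta, delta[-1:]], dim=0)           # pad the last interval (assumption)
    o = 1.0 - torch.exp(-density * delta)                   # occupancy probabilities
    # Ray termination probability: occupied here, free at all previous samples.
    transmittance = torch.cumprod(1.0 - o + 1e-10, dim=0)
    transmittance = torch.cat([torch.ones_like(transmittance[:1]), transmittance[:-1]], dim=0)
    w = o * transmittance
    depth = (w * d).sum()                                    # Eq. (1)
    rgb = (w[:, None] * colour).sum(dim=0)                   # Eq. (1)
    depth_var = (w * (depth - d) ** 2).sum()                 # Eq. (2)
    return depth, rgb, depth_var
```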
| 94 |
+
|
| 95 |
+
# 3.4. Joint optimisation
|
| 96 |
+
|
| 97 |
+
We jointly optimise the implicit scene network parameters $\theta$ , and camera poses for a growing set of $W$ keyframes, each of which has associated colour and depth measurements along with an initial pose estimate: $\{I_i, D_i, T_i\}$ .
|
| 98 |
+
|
| 99 |
+
Our rendering function is differentiable with respect to these variables, so we perform iterative optimisation to minimise the geometric and photometric errors for a selected number of rendered pixels $s_i$ in each keyframe.
|
| 100 |
+
|
| 101 |
+
The photometric loss is the L1-norm between the rendered and measured colour values $e_i^p[u, v] = \left| I_i[u, v] - \hat{I}_i[u, v] \right|$ for $M$ pixel samples:
|
| 102 |
+
|
| 103 |
+
$$
|
| 104 |
+
L_p = \frac{1}{M} \sum_{i=1}^{W} \sum_{(u,v) \in s_i} e_i^p[u,v]. \tag{3}
|
| 105 |
+
$$
|
| 106 |
+
|
| 107 |
+
The geometric loss measures the depth difference $e_i^g [u,v] = \left|D_i[u,v] - \hat{D}_i[u,v]\right|$ and uses the depth variance as a normalisation factor, down-weighting the loss in uncertain regions such as object borders:
|
| 108 |
+
|
| 109 |
+
$$
|
| 110 |
+
L_g = \frac{1}{M} \sum_{i=1}^{W} \sum_{(u,v) \in s_i} \frac{e_i^g[u,v]}{\sqrt{\hat{D}_{\mathrm{var}}[u,v]}}. \tag{4}
|
| 111 |
+
$$
|
| 112 |
+
|
| 113 |
+
We apply the ADAM optimiser [9] on the weighted sum of both losses, with factor $\lambda_{p}$ adjusting the importance given to the photometric error:
|
| 114 |
+
|
| 115 |
+
$$
|
| 116 |
+
\min_{\theta, \{T_i\}} \left( L_g + \lambda_p L_p \right). \tag{5}
|
| 117 |
+
$$
|
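A hedged sketch of the combined objective of Eqs. (3)-(5), assuming rendered and measured values have already been gathered for the $M$ sampled pixels of the keyframe window (tensor names and the per-channel L1 reduction are assumptions):

```python
import torch

def slam_loss(I_meas, I_rend, D_meas, D_rend, D_var, lambda_p=5.0):
    """All tensors hold the M sampled pixels stacked across the keyframe window;
    colour tensors have shape (M, 3), depth tensors (M,)."""
    e_p = torch.abs(I_meas - I_rend).sum(dim=-1)        # photometric L1 error per pixel
    e_g = torch.abs(D_meas - D_rend)                     # geometric depth error per pixel
    L_p = e_p.mean()                                     # Eq. (3)
    L_g = (e_g / torch.sqrt(D_var + 1e-10)).mean()       # Eq. (4), variance-normalised
    return L_g + lambda_p * L_p                          # Eq. (5)
```

In the mapping process this scalar would be minimised with ADAM over both the network weights and the keyframe pose parameters.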
| 118 |
+
|
| 119 |
+
Camera Tracking In online SLAM, close to frame-rate camera tracking is important, as optimisation of smaller displacements is more robust. We run a parallel tracking process that continuously optimises the pose of the latest frame with respect to the fixed scene network at a much higher frame rate than joint optimisation while using the same loss and optimiser. The tracked pose initialisation is refined in the mapping process for selected keyframes.
|
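A rough sketch of one tracking update against the locked map (the pose parameterisation and the `render_and_compare` helper are hypothetical placeholders, not the paper's interface):

```python
import torch

def track_frame(scene_net, pose_params, frame, n_iters=6, lr=1e-3):
    """One tracking step: optimise only the live-frame pose against the locked map."""
    for p in scene_net.parameters():
        p.requires_grad_(False)                    # scene network stays fixed during tracking
    pose_params.requires_grad_(True)
    opt = torch.optim.Adam([pose_params], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        # render_and_compare is a hypothetical helper returning the Eq. (5) loss
        # for uniformly sampled pixels of this frame.
        loss = render_and_compare(scene_net, pose_params, frame)
        loss.backward()
        opt.step()
    return pose_params.detach()
```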
| 120 |
+
|
| 121 |
+
# 3.5. Keyframe Selection
|
| 122 |
+
|
| 123 |
+
Jointly optimising the network parameters and camera poses using all images from a video stream is not computationally feasible. However, since there is huge redundancy in video images, we may represent a scene with a sparse set of representative keyframes, incrementally selected based on information gain. The first frame is always selected to initialise the network and fix the world coordinate frame. Every time a new keyframe is added, we lock a copy of our network to represent a snapshot of our 3D map at that point in time. Subsequent frames are checked against this copy and are selected if they see a significantly new region.
|
| 124 |
+
|
| 125 |
+
For this, we render a uniform set of pixel samples $s$ and calculate the proportion $P$ with a normalised depth error smaller than threshold $t_{D} = 0.1$ , to measure the fraction of the frame already explained by our map snapshot:
|
| 126 |
+
|
| 127 |
+
$$
|
| 128 |
+
P = \frac{1}{|s|} \sum_{(u,v) \in s} \mathbb{1}\left( \frac{\left| D[u,v] - \hat{D}[u,v] \right|}{D[u,v]} < t_D \right). \tag{6}
|
| 129 |
+
$$
|
| 130 |
+
|
| 131 |
+
When this proportion falls under a threshold $P < t_{P}$ (we set $t_{P} = 0.65$ ), this frame is added to the keyframe set. The normalised depth error produces adaptive keyframe selection, requiring higher precision, and therefore more closely spaced keyframes, when the camera is closer to objects.
|
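A sketch of this registration test (Eq. (6)); the handling of invalid depth pixels is an assumption:

```python
import torch

def is_new_keyframe(D_meas, D_rend, t_D=0.1, t_P=0.65):
    """D_meas, D_rend: measured and rendered depth at uniformly sampled pixels."""
    valid = D_meas > 0                                    # skip invalid depth readings (assumption)
    err = torch.abs(D_meas[valid] - D_rend[valid]) / D_meas[valid]
    P = (err < t_D).float().mean()                        # fraction already explained, Eq. (6)
    return P < t_P                                        # register as keyframe when coverage is low
```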
| 132 |
+
|
| 133 |
+
Every frame received in the mapping process is used in joint optimisation for a few iterations (between 10 and 20), so our keyframe set is always composed of the selected set along with the continuously changing latest frame.
|
| 134 |
+
|
| 135 |
+
# 3.6. Active Sampling
|
| 136 |
+
|
| 137 |
+
Image Active Sampling Rendering and optimising all image pixels would be expensive in computation and memory. We take advantage of image regularity to render and optimise only a very sparse set of random pixels (200 per image) at each iteration. Further, we use the render loss to guide active sampling in informative areas with higher detail or where reconstruction is not yet precise.
|
| 138 |
+
|
| 139 |
+
Each joint optimisation iteration is divided into two stages. First, we sample a set $s_i$ of pixels, uniformly distributed across each of the keyframe's depth and colour images. These pixels are used to update the network and camera poses, and to calculate the loss statistics. For this, we divide each image into an $[8 \times 8]$ grid, and calculate the average loss inside each square region $R_j$ , $j = \{1, 2, \dots, 64\}$ :
|
| 140 |
+
|
| 141 |
+
$$
|
| 142 |
+
L_i[j] = \frac{1}{|r_j|} \sum_{(u,v) \in r_j} \left( e_i^g[u,v] + e_i^p[u,v] \right), \tag{7}
|
| 143 |
+
$$
|
| 144 |
+
|
| 145 |
+
where $r_j = s_i \cap R_j$ are pixels uniformly sampled from $R_j$ . We normalise these statistics into a probability distribution:
|
| 146 |
+
|
| 147 |
+
$$
|
| 148 |
+
f_i[j] = \frac{L_i[j]}{\sum_{m=1}^{64} L_i[m]}. \tag{8}
|
| 149 |
+
$$
|
| 150 |
+
|
| 151 |
+

|
| 152 |
+
Figure 3: Image Active Sampling. Left: a loss distribution is calculated across an image grid using the geometric loss from a set of uniform samples. Right: active samples are further allocated proportional to the loss distribution.
|
| 153 |
+
|
| 154 |
+
We use this distribution to re-sample a new set of $n_i \cdot f_i[j]$ uniform samples per region ( $n_i$ is the total samples in each keyframe), allocating more samples to regions with high loss. The scene network is updated with the loss from active samples (in camera tracking only uniform sampling is used). Image active sampling is illustrated in Fig. 3.
|
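The re-allocation of Eqs. (7)-(8) might be sketched as follows; the grid geometry and the rounding of per-region counts are illustrative details:

```python
import torch

def allocate_active_samples(region_loss, n_total, H, W, grid=8):
    """region_loss: (grid*grid,) mean loss per image cell; H, W: image size.
    Returns roughly n_total pixel coordinates, more in high-loss regions."""
    f = region_loss / region_loss.sum()                   # Eq. (8): normalised loss distribution
    counts = torch.round(f * n_total).long()              # samples per region (approximate)
    cell_h, cell_w = H // grid, W // grid
    us, vs = [], []
    for j, n_j in enumerate(counts):
        r, c = j // grid, j % grid
        # Uniform samples inside region R_j.
        vs.append(r * cell_h + torch.randint(0, cell_h, (int(n_j),)))
        us.append(c * cell_w + torch.randint(0, cell_w, (int(n_j),)))
    return torch.cat(us), torch.cat(vs)
```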
| 155 |
+
|
| 156 |
+
Keyframe Active Sampling In iMAP, we continuously optimise our scene map with a set of selected keyframes, serving as a memory bank to avoid network forgetting. We wish to allocate more samples to keyframes with a higher loss, because they relate to regions which are newly explored, highly detailed, or that the network has started to forget. We follow a process analogous to image active sampling and allocate $n_i$ samples to each keyframe, proportional to the loss distribution across keyframes; see Fig. 4.
|
| 157 |
+
|
| 158 |
+
Bounded Keyframe Selection Our keyframe set keeps growing as the camera moves to new and unexplored regions. To bound joint optimisation computation, we choose a fixed number (3 in the live system) of keyframes at each iteration, randomly sampled according to the loss distribution. We always include the last keyframe and the current live frame in joint optimisation, to compose a bounded window with $W = 5$ constantly changing frames. See Fig. 4.
|
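A minimal sketch of this bounded window selection, assuming a per-keyframe loss vector is maintained (function and variable names are illustrative):

```python
import torch

def select_window(keyframe_losses, last_kf_idx, live_idx, n_random=3):
    """Pick n_random keyframes proportional to loss, plus the last keyframe and the live frame."""
    probs = keyframe_losses / keyframe_losses.sum()
    picked = torch.multinomial(probs, n_random, replacement=False).tolist()
    # Target is a window of W = 5 frames; it may shrink if the random picks
    # overlap the fixed members (handling of that case is an assumption).
    window = set(picked) | {last_kf_idx, live_idx}
    return sorted(window)
```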
| 159 |
+
|
| 160 |
+

|
| 161 |
+
Figure 4: Keyframe Active Sampling. We maintain a loss distribution over the registered keyframes. The distribution is used for sampling a bounded window of keyframes (red boxes), and for allocating pixel samples in each.
|
| 162 |
+
|
| 163 |
+
# 4. Experimental Results
|
| 164 |
+
|
| 165 |
+
Through comprehensive experiments we evaluate iMAP's 3D reconstruction and tracking, and conduct a detailed ablative analysis of design choices on accuracy and speed. Please see our attached video demonstrations.
|
| 166 |
+
|
| 167 |
+
# 4.1. Experimental Setup
|
| 168 |
+
|
| 169 |
+
Datasets We experiment on both simulated and real sequences. For reconstruction evaluation we use the Replica dataset [29], high quality 3D reconstructions of real room-scale environments, with 5 offices and 3 apartments. For each Replica scene, we render a random trajectory of 2000 RGB-D frames. For raw camera recordings, we capture RGB-D videos using a hand-held Microsoft Azure Kinect on a wide variety of environments, as well as test on the TUM RGB-D dataset [30] to evaluate camera tracking.
|
| 170 |
+
|
| 171 |
+
Implementation Details For all experiments we set the following default parameters: keyframe registration threshold $t_P = 0.65$, photometric loss weighting $\lambda_p = 5$, keyframe window size $W = 5$, pixel samples $|s_i| = 200$, positional embedding size $n = 93$ with $\sigma = 25$, and 32 coarse and 12 fine bins for rendering. 3D point coordinates are normalised by $\frac{1}{10}$ to be close to the $[0,1]$ range.
|
| 172 |
+
|
| 173 |
+
In online operation from a hand-held camera, streamed images which arrive between processed frames are dropped. For the experiments presented here every captured frame is processed, running at $10\mathrm{Hz}$ . We recover mesh reconstructions if needed by querying occupancy values from the network in a uniform voxel grid and then running marching cubes. Meshing is for visualisation and evaluation purposes and does not form part of our SLAM system.
|
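A rough sketch of such mesh extraction, assuming the scene network exposes the density head and using scikit-image's marching cubes (grid bounds, bin size and occupancy threshold are assumptions; vertices come back in voxel coordinates and would still need rescaling to world units):

```python
import torch
from skimage import measure

def extract_mesh(scene_net, res=256, bound=1.0, thresh=0.5, delta=0.02):
    # Query occupancy on a uniform voxel grid covering the (normalised) scene volume.
    xs = torch.linspace(-bound, bound, res)
    grid = torch.stack(torch.meshgrid(xs, xs, xs, indexing="ij"), dim=-1).reshape(-1, 3)
    with torch.no_grad():
        _, density = scene_net(grid)
    occ = 1.0 - torch.exp(-density * delta)               # density -> occupancy with a fixed bin size (assumption)
    occ = occ.reshape(res, res, res).cpu().numpy()
    verts, faces, _, _ = measure.marching_cubes(occ, level=thresh)
    return verts, faces
```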
| 174 |
+
|
| 175 |
+
# 4.2. Scene Reconstruction Evaluation
|
| 176 |
+
|
| 177 |
+
Metrics We sample 200,000 points from both ground-truth and reconstructed meshes, and calculate three quantitative metrics:
|
| 178 |
+
|
| 179 |
+

|
| 180 |
+
Figure 5: Reconstruction and tracking results for Replica room-0 along with registered keyframes.
|
| 181 |
+
|
| 182 |
+

|
| 183 |
+
Figure 6: iMAP (left) manages to fill in unobserved regions which can be seen as holes in TSDF fusion (right).
|
| 184 |
+
|
| 185 |
+

|
| 186 |
+
|
| 187 |
+
Accuracy (cm): the average distance between sampled points from the reconstructed mesh and the nearest ground-truth point; Completion (cm): the average distance between sampled points from the ground-truth mesh and the nearest reconstructed point; and Completion Ratio (<5cm %): the percentage of points in the reconstructed mesh with Completion under 5 cm.
|
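These metrics can be computed from the sampled point sets with nearest-neighbour queries, e.g. (a sketch using SciPy's KD-tree; coordinates are assumed to be in metres):

```python
from scipy.spatial import cKDTree

def mesh_metrics(rec_pts, gt_pts, thresh=0.05):
    """rec_pts, gt_pts: (N, 3) points sampled from reconstructed / ground-truth meshes."""
    d_rec_to_gt = cKDTree(gt_pts).query(rec_pts)[0]          # Accuracy: reconstruction -> nearest GT
    d_gt_to_rec = cKDTree(rec_pts).query(gt_pts)[0]          # Completion: GT -> nearest reconstruction
    accuracy = 100.0 * d_rec_to_gt.mean()                     # cm
    completion = 100.0 * d_gt_to_rec.mean()                   # cm
    completion_ratio = 100.0 * (d_gt_to_rec < thresh).mean()  # % of completion distances under 5 cm
    return accuracy, completion, completion_ratio
```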
| 188 |
+
|
| 189 |
+
The ability to jointly optimise a 3D map along with camera poses gives our system the capacity to build full globally coherent scene reconstructions as seen in Fig. 1 and 7, and accurate camera tracking as shown in Fig. 5. The robustness and versatility of iMAP is demonstrated on a wide variety of real world recordings, through the reconstructions in Fig. 9 and 8 that show its ability to work at scales from whole rooms to small objects and thin structures.
|
| 190 |
+
|
| 191 |
+
We compare scene reconstructions from iMAP with TSDF fusion [4, 17], which is representative of fusion-based dense SLAM methods. To isolate reconstruction, we use the camera tracking produced by iMAP for TSDF fusion. The most significant advantage of our implicit representation is the ability to fill in unobserved regions as shown in Figs. 7 and 8. iMAP achieves on average a $4\%$ higher completion ratio across all 8 Replica scenes as seen in Table 1, with an improvement of $11\%$ in office-3.
|
| 192 |
+
|
| 193 |
+
Memory consumption for iMAP and TSDF fusion with different configuration settings is shown in Table 2. With default values of $256^{3}$ voxel resolution in TSDF fusion and 256 network width in iMAP, our system can represent scenes with a factor of 60 less memory usage while obtaining similar reconstruction accuracy as seen in Table 1.
|
| 194 |
+
|
| 195 |
+
When using a real camera, in addition to better completion our method outperforms TSDF fusion in places where a depth camera does not give accurate readings, as is common for black objects (Fig. 8d) and reflective or transparent surfaces (Fig. 6). This performance can be attributed to the photometric loss for reconstruction combined with the interpolation capacity of the map network.
|
| 196 |
+
|
| 197 |
+
<table><tr><td></td><td></td><td>room-0</td><td>room-1</td><td>room-2</td><td>office-0</td><td>office-1</td><td>office-2</td><td>office-3</td><td>office-4</td><td>Avg.</td></tr><tr><td rowspan="4">iMAP</td><td>#Keyframes</td><td>11</td><td>12</td><td>12</td><td>10</td><td>11</td><td>10</td><td>14</td><td>11</td><td>13.37</td></tr><tr><td>Acc.[cm]</td><td>3.58</td><td>3.69</td><td>4.68</td><td>5.87</td><td>3.71</td><td>4.81</td><td>4.27</td><td>4.83</td><td>4.43</td></tr><tr><td>Comp.[cm]</td><td>5.06</td><td>4.87</td><td>5.51</td><td>6.11</td><td>5.26</td><td>5.65</td><td>5.45</td><td>6.59</td><td>5.56</td></tr><tr><td>Comp.Ratio [<5cm %]</td><td>83.91</td><td>83.45</td><td>75.53</td><td>77.71</td><td>79.64</td><td>77.22</td><td>77.34</td><td>77.63</td><td>79.06</td></tr><tr><td rowspan="3">TSDF Fusion</td><td>Acc.[cm]</td><td>4.21</td><td>3.08</td><td>2.88</td><td>2.70</td><td>2.66</td><td>4.27</td><td>4.07</td><td>3.70</td><td>3.45</td></tr><tr><td>Comp.[cm]</td><td>5.04</td><td>4.35</td><td>5.40</td><td>10.47</td><td>10.29</td><td>6.43</td><td>6.26</td><td>4.78</td><td>6.63</td></tr><tr><td>Comp.Ratio [<5cm %]</td><td>76.90</td><td>79.87</td><td>77.79</td><td>79.60</td><td>71.93</td><td>71.66</td><td>65.87</td><td>77.11</td><td>75.09</td></tr></table>
|
| 198 |
+
|
| 199 |
+
Table 1: Reconstruction results for 8 indoor Replica scenes. We report the highest reached completion ratio in each scene along with the corresponding accuracy and completion values at that point.
|
| 200 |
+
|
| 201 |
+

|
| 202 |
+
Figure 7: Replica reconstructions, highlighting how iMAP fills in unobserved regions which are white holes in TSDF fusion.
|
| 203 |
+
|
| 204 |
+

|
| 205 |
+
Figure 8: Comparative reconstruction results in various real scenes mapped with an Azure Kinect. White holes in the TDSF fusion results are plausibly filled in by iMAP.
|
| 206 |
+
|
| 207 |
+

|
| 208 |
+
Figure 9: Real-time reconstruction results from iMAP in a variety of real world settings.
|
| 209 |
+
|
| 210 |
+

|
| 211 |
+
|
| 212 |
+

|
| 213 |
+
|
| 214 |
+

|
| 215 |
+
|
| 216 |
+
<table><tr><td rowspan="2">iMAP [MB]</td><td>Width = 128</td><td>Width = 256</td><td>Width = 512</td></tr><tr><td>0.26</td><td>1.04</td><td>4.19</td></tr><tr><td rowspan="2">TSDF Fusion [MB]</td><td>Res. = 128</td><td>Res. = 256</td><td>Res. = 512</td></tr><tr><td>8.38</td><td>67.10</td><td>536.87</td></tr></table>
|
| 217 |
+
|
| 218 |
+
|
| 219 |
+
|
| 220 |
+
# 4.3. TUM Evaluation
|
| 221 |
+
|
| 222 |
+
We run iMAP on three sequences from TUM RGB-D. Tracking ATE RMSE is shown in Table 3. We compare with surfel-based BAD-SLAM [24], TSDF fusion Kintinuous [38], and sparse ORB-SLAM2 [16], state-of-the-art SLAM systems. In pose accuracy, iMAP does not outperform them, but is competitive with errors between 2 and 6 cm. Mesh reconstructions are shown in Figure 10. In Figure 11 we highlight how iMAP fills in holes in unobserved regions unlike BAD-SLAM.
|
| 223 |
+
|
| 224 |
+
Table 2: Memory consumption for iMAP as a function of network width, and for TSDF fusion as a function of voxel resolution.
|
| 225 |
+
|
| 226 |
+
<table><tr><td></td><td>fr1/desk (cm)</td><td>fr2/xyz (cm)</td><td>fr3/office (cm)</td></tr><tr><td>iMAP</td><td>4.9</td><td>2.0</td><td>5.8</td></tr><tr><td>BAD-SLAM</td><td>1.7</td><td>1.1</td><td>1.73</td></tr><tr><td>Kintinuous</td><td>3.7</td><td>2.9</td><td>3.0</td></tr><tr><td>ORB-SLAM2</td><td>1.6</td><td>0.4</td><td>1.0</td></tr></table>
|
| 227 |
+
|
| 228 |
+
Table 3: ATE RMSE in cm on TUM RGB-D dataset.
|
| 229 |
+
|
| 230 |
+
# 4.4. Ablative Analysis
|
| 231 |
+
|
| 232 |
+
We analyse the design choices that affect our system using the largest Replica scene, office-2, with three different random seeds. Completion ratio results and timings are shown in Table 4. We found that network width $= 256$, keyframe window size limit of $W = 5$, and 200 pixel samples per frame offered the best trade-off of convergence speed and accuracy.
|
| 233 |
+
|
| 234 |
+

|
| 235 |
+
|
| 236 |
+

|
| 237 |
+
|
| 238 |
+

|
| 239 |
+
Figure 10: iMAP reconstruction results for TUM dataset.
|
| 240 |
+
|
| 241 |
+

|
| 242 |
+
|
| 243 |
+

|
| 244 |
+
|
| 245 |
+

|
| 246 |
+
Figure 11: Hole filling capacity of iMAP (top) against BAD-SLAM (bottom).
|
| 247 |
+
|
| 248 |
+

|
| 249 |
+
|
| 250 |
+
We further show in Fig. 12 that active sampling enables faster accuracy convergence and higher scene completion than random sampling.
|
| 251 |
+
|
| 252 |
+
These design choices enable our online implicit SLAM system to run at $10\mathrm{Hz}$ for tracking and $2\mathrm{Hz}$ for mapping.
|
| 253 |
+
|
| 254 |
+
<table><tr><td rowspan="2"></td><td rowspan="2">Default</td><td colspan="2">Width</td><td colspan="2">Window</td><td colspan="2">Pixels</td></tr><tr><td>128</td><td>512</td><td>3</td><td>10</td><td>100</td><td>400</td></tr><tr><td>Tracking Time [ms]</td><td>101</td><td>80</td><td>173</td><td>84</td><td>144</td><td>74</td><td>160</td></tr><tr><td>Joint Optim. Time [ms]</td><td>448</td><td>357</td><td>777</td><td>373</td><td>647</td><td>340</td><td>716</td></tr><tr><td>Comp. Ratio [<5cm %]</td><td>77.22</td><td>75.79</td><td>76.91</td><td>75.82</td><td>77.35</td><td>77.33</td><td>77.49</td></tr></table>
|
| 255 |
+
|
| 256 |
+
Table 4: Timing results for tracking (6 iterations) and mapping (10 iterations), running concurrently on the same GPU. Default configuration: network width 256, window size 5, and 200 samples per keyframe. Last row: completion ratio for Replica office-2.
|
| 257 |
+
|
| 258 |
+
<table><tr><td></td><td>tP=0.55</td><td>tP=0.65</td><td>tP=0.75</td><td>tP=0.85</td></tr><tr><td>#Keyframes</td><td>8</td><td>10</td><td>14</td><td>24</td></tr><tr><td>Comp. Ratio [<5cm %]</td><td>74.11</td><td>77.22</td><td>76.84</td><td>78.03</td></tr></table>
|
| 259 |
+
|
| 260 |
+
Table 5: Number of keyframes and completion ratio results for different selection thresholds in Replica office-2.
|
| 261 |
+
|
| 262 |
+
Our experiments demonstrate the power of randomised sampling in optimisation, and highlight the key finding that it is better to iterate fast with randomly changing information than to use dense and slow iterations.
|
| 263 |
+
|
| 264 |
+
Combining geometric and photometric losses enables our system to obtain full room scale reconstructions from few keyframes; 13 on average for the 8 Replica scenes in Table 1. Using more keyframes does little to further improve scene completion as shown in Table 5.
|
| 265 |
+
|
| 266 |
+
Implicit scene networks have the property of converging fast to low frequency shapes before adding higher frequency scene details. Fig. 13 shows network training from a static camera averaged over 5 different real scenes. The depth loss falls below $5\mathrm{cm}$ in under a second; under $2\mathrm{cm}$ in 4 seconds; then continues to decrease slowly.
|
| 267 |
+
|
| 268 |
+

|
| 269 |
+
Figure 12: Active sampling obtains better completion with faster accuracy convergence than pure random sampling.
|
| 270 |
+
|
| 271 |
+

|
| 272 |
+
|
| 273 |
+

|
| 274 |
+
Figure 13: Reaching $5\mathrm{cm}$ , $2\mathrm{cm}$ , $1\mathrm{cm}$ and $0.75\mathrm{cm}$ depth error requires around 1, 4, 20, 43 seconds respectively.
|
| 275 |
+
|
| 276 |
+

|
| 277 |
+
Figure 14: Evolution of level of detail.
|
| 278 |
+
|
| 279 |
+

|
| 280 |
+
|
| 281 |
+

|
| 282 |
+
|
| 283 |
+
When mapping a new scene, our system takes seconds to get a coarse reconstruction and minutes to add in fine details. In Fig. 14 we show how the system starts with a rough reconstruction and adds detail as the network trains and the camera moves closer to objects. This is a useful property in SLAM as it enables live tracking to work even when moving to unexplored regions.
|
| 284 |
+
|
| 285 |
+
# 5. Conclusions
|
| 286 |
+
|
| 287 |
+
We pose dense SLAM as real-time continual learning and show that an MLP can be trained from scratch as the only scene representation in a live system, thus enabling an RGB-D camera to construct and track against a complete and accurate volumetric model of room-scale scenes. The keys to the real-time but long-term SLAM performance of our method are: parallel tracking and mapping, loss-guided pixel sampling for rapid optimisation, and intelligent keyframe selection as replay to avoid network forgetting. Future directions for iMAP include how to make more structured and compositional representations that reason explicitly about the self similarity in scenes.
|
| 288 |
+
|
| 289 |
+
# Acknowledgements
|
| 290 |
+
|
| 291 |
+
Research presented here has been supported by Dyson Technology Ltd. We thank Kentaro Wada, Tristan Laidlow, and Shuaifeng Zhi for fruitful discussions.
|
| 292 |
+
|
| 293 |
+
# References
|
| 294 |
+
|
| 295 |
+
[1] M. Bloesch, J. Czarnowski, R. Clark, S. Leutenegger, and A. J. Davison. CodeSLAM — learning a compact, optimisable representation for dense visual SLAM. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
|
| 296 |
+
[2] Rohan Chabra, Jan Eric Lenssen, Eddy Ilg, Tanner Schmidt, Julian Straub, Steven Lovegrove, and Richard Newcombe. Deep Local Shapes: Learning local SDF priors for detailed 3d reconstruction. Proceedings of the European Conference on Computer Vision (ECCV), 2020. 2
|
| 297 |
+
[3] Julian Chibane, Gerard Pons-Moll, et al. Neural unsigned distance fields for implicit function learning. Neural Information Processing Systems (NIPS), 2020. 2
|
| 298 |
+
[4] B. Curless and M. Levoy. A volumetric method for building complex models from range images. In Proceedings of SIGGRAPH, 1996. 5
|
| 299 |
+
[5] Angela Dai, Christian Diller, and Matthias Nießner. SG-NN: Sparse generative neural networks for self-supervised scene completion of RGB-D scans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2
|
| 300 |
+
[6] Angela Dai, Matthias Nießner, Michael Zollhöfer, Shahram Izadi, and Christian Theobalt. BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration. ACM Transactions on Graphics (TOG), 36(3):24:1-24:18, 2017. 2
|
| 301 |
+
[7] Stephen Grossberg. How does a brain build a cognitive code? In Studies of mind and brain, pages 1-52. Springer, 1982. 2
|
| 302 |
+
[8] M. Keller, D. Lefloch, M. Lambers, S. Izadi, T. Weyrich, and A. Kolb. Real-time 3D Reconstruction in Dynamic Scenes using Point-based Fusion. In Proc. of Joint 3DIM/3DPVT Conference (3DV), 2013. 2
|
| 303 |
+
[9] Diederik P. Kingma and Jimmy Ba. ADAM: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), 2015. 3
|
| 304 |
+
[10] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521-3526, 2017. 2
|
| 305 |
+
[11] G. Klein and D. W. Murray. Parallel Tracking and Mapping for Small AR Workspaces. In Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR), 2007. 1, 2
|
| 306 |
+
[12] Timothee Lesort, Hugo Caselles-Dupré, Michael Garcia-Ortiz, Andrei Stoian, and David Filliat. Generative models from the perspective of continual learning. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2019. 2
|
| 307 |
+
[13] Davide Maltoni and Vincenzo Lomonaco. Continuous learning in single-incremental-task scenarios. Neural Networks, 116:56-73, 2019. 2
|
| 308 |
+
[14] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy Networks: Learning 3D reconstruction in function space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2
|
| 311 |
+
[15] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In Proceedings of the European Conference on Computer Vision (ECCV), 2020. 2, 3
|
| 312 |
+
[16] R. Mur-Artal and J. D. Tardós. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Transactions on Robotics (T-RO), 33(5):1255–1262, 2017. 7
|
| 313 |
+
[17] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. Fitzgibbon. KinectFusion: Real-Time Dense Surface Mapping and Tracking. In Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR), 2011. 2, 5
|
| 314 |
+
[18] R. A. Newcombe, S. Lovegrove, and A. J. Davison. DTAM: Dense Tracking and Mapping in Real-Time. In Proceedings of the International Conference on Computer Vision (ICCV), 2011. 2
|
| 315 |
+
[19] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. DeepSDF: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2
|
| 316 |
+
[20] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In Neural Information Processing Systems (NIPS), 2019. 1, 2
|
| 317 |
+
[21] Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, and Andreas Geiger. Convolutional occupancy networks. In Proceedings of the European Conference on Computer Vision (ECCV), 2020. 2
|
| 318 |
+
[22] David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. Experience replay for continual learning. In Neural Information Processing Systems (NIPS), 2019. 2
|
| 319 |
+
[23] Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016. 2
|
| 320 |
+
[24] Thomas Schops, Torsten Sattler, and Marc Pollefeys. BAD SLAM: Bundle adjusted direct RGB-D SLAM. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2, 7
|
| 321 |
+
[25] Jonathan Schwarz, Jelena Luketina, Wojciech M Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning. arXiv preprint arXiv:1805.06370, 2018. 2
|
| 322 |
+
[26] Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In Neural Information Processing Systems (NIPS), 2017. 2
|
| 323 |
+
[27] Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. Neural Information Processing Systems (NIPS), 2020. 2, 3
|
| 326 |
+
[28] R. C. Smith and P. Cheeseman. On the Representation and Estimation of Spatial Uncertainty. International Journal of Robotics Research (IJRR), 5(4):56-68, Dec. 1986. 2
|
| 327 |
+
[29] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, et al. The Replica Dataset: A digital replica of indoor spaces. arXiv preprint arXiv:1906.05797, 2019. 2, 5
|
| 328 |
+
[30] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers. A Benchmark for the Evaluation of RGB-D SLAM Systems. In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS), 2012. 2, 5
|
| 329 |
+
[31] Edgar Sucar, Kentaro Wada, and Andrew Davison. NodeSLAM: Neural object descriptors for multi-view shape reconstruction. In Proceedings of the International Conference on 3D Vision (3DV), 2020. 3
|
| 330 |
+
[32] Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. Neural Information Processing Systems (NIPS), 2020. 3
|
| 331 |
+
[33] Danhang Tang, Saurabh Singh, Philip A Chou, Christian Hane, Mingsong Dou, Sean Fanello, Jonathan Taylor, Philip Davidson, Onur G Guleryuz, Yinda Zhang, et al. Deep implicit volume compression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2
|
| 332 |
+
[34] Emanuele Vespa, Nikolay Nikolov, Marius Grimm, Luigi Nardi, Paul HJ Kelly, and Stefan Leutenegger. Efficient octree-based volumetric SLAM supporting signed-distance and occupancy mapping. IEEE Robotics and Automation Letters, 2018. 2
|
| 333 |
+
[35] Zirui Wang, Shangzhe Wu, Weidi Xie, Min Chen, and Victor Adrian Prisacariu. NeRF-: Neural radiance fields without known camera parameters. arXiv preprint arXiv:2102.07064, 2021. 2
|
| 334 |
+
[36] Silvan Weder, Johannes Schonberger, Marc Pollefeys, and Martin R Oswald. RoutedFusion: Learning real-time depth map fusion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2
|
| 335 |
+
[37] T. Whelan, S. Leutenegger, R. F. Salas-Moreno, B. Glocker, and A. J. Davison. ElasticFusion: Dense SLAM without a pose graph. In Proceedings of Robotics: Science and Systems (RSS), 2015. 2
|
| 336 |
+
[38] T. Whelan, J. B. McDonald, M. Kaess, M. Fallon, H. Johannsson, and J. J. Leonard. Kintinuous: Spatially Extended KinectFusion. In Workshop on RGB-D: Advanced Reasoning with Depth Cameras, in conjunction with Robotics: Science and Systems, 2012. 7
|
| 337 |
+
[39] Lin Yen-Chen, Pete Florence, Jonathan T Barron, Alberto Rodriguez, Phillip Isola, and Tsung-Yi Lin. iNeRF: Inverting neural radiance fields for pose estimation. arXiv preprint arXiv:2012.05877, 2020. 2
|
imapimplicitmappingandpositioninginrealtime/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0198caa63e6e23dadeb21a43bd050883594f6d97e9e360a71fa71828254599d6
+size 748119
imapimplicitmappingandpositioninginrealtime/layout.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1b2504943bd9992b876098622fbc8cf7ddd4dc75ee0376b118ec853a7210b8a
+size 365226
imghumimplicitgenerativemodelsof3dhumanshapeandarticulatedpose/36a7c835-a69c-46aa-a068-2da3104f3ca8_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a1902a2b03a1a3f3de0495ca0493dd49544598fb8e136450f5b08e7f662f844
+size 79813
imghumimplicitgenerativemodelsof3dhumanshapeandarticulatedpose/36a7c835-a69c-46aa-a068-2da3104f3ca8_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85035bff32916d34536b3bea92a0a5d23bc61050a1208b3524370e0d58927808
+size 97703
imghumimplicitgenerativemodelsof3dhumanshapeandarticulatedpose/36a7c835-a69c-46aa-a068-2da3104f3ca8_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7db7db0514c5c295b8be5ce1fd0f407992b6b5219737bf6a18cdc0fbf3f7f9d
+size 4655876
imghumimplicitgenerativemodelsof3dhumanshapeandarticulatedpose/full.md
ADDED
|
@@ -0,0 +1,280 @@
| 1 |
+
# imGHUM: Implicit Generative Models of 3D Human Shape and Articulated Pose
|
| 2 |
+
|
| 3 |
+
Thiemo Alldieck*
|
| 4 |
+
|
| 5 |
+
Hongyi Xu*
|
| 6 |
+
|
| 7 |
+
Cristian Sminchisescu
|
| 8 |
+
|
| 9 |
+
Google Research
|
| 10 |
+
|
| 11 |
+
{alldieck,hongyixu,sminchisescu}@google.com
|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
We present imGHUM, the first holistic generative model of 3D human shape and articulated pose, represented as a signed distance function. In contrast to prior work, we model the full human body implicitly as a function zero-level-set and without the use of an explicit template mesh. We propose a novel network architecture and a learning paradigm, which make it possible to learn a detailed implicit generative model of human pose, shape, and semantics, on par with state-of-the-art mesh-based models. Our model features desired detail for human models, such as articulated pose including hand motion and facial expressions, a broad spectrum of shape variations, and can be queried at arbitrary resolutions and spatial locations. Additionally, our model has attached spatial semantics making it straightforward to establish correspondences between different shape instances, thus enabling applications that are difficult to tackle using classical implicit representations. In extensive experiments, we demonstrate the model accuracy and its applicability to current research problems.
|
| 16 |
+
|
| 17 |
+
# 1. Introduction
|
| 18 |
+
|
| 19 |
+
Mathematical models of the human body have been proven effective in a broad variety of tasks. In the last decades models of varying degrees of realism have been successfully deployed e.g. for 3D human motion analysis [46], 3D human pose and shape reconstruction [24, 52], personal avatar creation [3, 54], medical diagnosis and treatment [16], or image synthesis and video editing [53, 21]. Modern statistical body models are typically learnt from large collections of 3D scans of real people, which are used to capture the body shape variations among the human population. Dynamic scans, when available, can be used to further model how different poses affect the deformation of the muscles and the soft-tissue of the human body.
|
| 20 |
+
|
| 21 |
+
The recently released GHUM model [49] follows this methodology by describing the human body, its shape variation, articulated pose including fingers, and facial expressions as a moderate resolution mesh based on a low-dimensional, partly interpretable parameterization.
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
|
| 25 |
+

|
| 26 |
+
Figure 1. imGHUM is the first parametric full human body model represented as an implicit signed distance function. imGHUM successfully models broad variations in pose, shape, and facial expressions. The level sets of imGHUM are shown in blue-scale.
|
| 27 |
+
|
| 28 |
+

|
| 29 |
+
|
| 30 |
+
deep learning literature GHUM and similar models [27, 23] are typically used as fixed function layers. This means that the model is parameterized with the output of a neural network or some other non-linear function, and the resulting mesh is used to compute the final function value. While this approach works well for several tasks, including, more recently, 3D reconstruction, the question of how to best represent complex 3D deformable and articulated structures is open. Recent work dealing with the 3D visual reconstruction of general objects aimed to represent the output not as meshes but as implicit functions [28, 32, 7, 29]. Such approaches thus describe surfaces by the zero-level-set (decision boundary) of a function over points in 3D-space. This has clear benefits as the output is neither constrained by a template mesh topology, nor is it discretized and thus of fixed spatial resolution.
|
| 31 |
+
|
| 32 |
+
In this work, we investigate the possibility of learning a data-driven statistical body model as an implicit function. Given the maturity of state-of-the-art explicit human models, it is crucial that an equivalent implicit representation maintains their key, attractive properties – representing comparable variation in shape and pose and a similar level of detail. This is challenging since recently proposed implicit function networks tend to produce overly smooth shapes and fail for articulated humans [8]. We propose a novel network architecture and a learning paradigm that enable, for the first time, constructing detailed generative models of human pose, shape, and semantics, represented as Signed Distance Functions (SDFs) (see fig. 1). Our multi-part architecture focuses on difficult-to-model body components like
|
| 33 |
+
|
| 34 |
+

|
| 35 |
+
Table 1. Comparison of different approaches to model human bodies. GHUM is mesh-based and thus discretized. IGR only allows for shape interpolation. NASA lacks generative capabilities for shape, hands, and facial expressions and only returns occupancy values. Only imGHUM combines all favorable properties.
|
| 36 |
+
|
| 37 |
+
hands and faces. Moreover, imGHUM models the space around the surface through distance values, enabling e.g. collision tests. Our model is not bound to a specific resolution and thus can be easily queried at arbitrary locations. Being template-free further paves the way towards our ultimate goal of fairly representing the diversity of mankind, including disabilities that may not always be well covered by a generic template of standard topology. Finally, in contrast to recent implicit function networks, our model additionally carries over the explicit semantics of mesh-based models. Specifically, our implicit function also returns correspondences to a canonical representation near and on its zero-level-set, enabling e.g. texturing or body part labeling. This holistic approach is novel and significantly more difficult to produce, as can be noted in prior work which could only demonstrate individual properties, cf. tab. 1. Our contribution – and the key to success – stems from the novel combination of adequate, generative latent representations, network architectures with fine-grained encoding, implicit losses with attached semantics, and the consistent aggregation of multi-part components. Besides extensive evaluation of 3D deformable and articulated modeling capabilities, we also demonstrate surface completion using imGHUM and give an outlook on modeling varying topologies. Our models are available for research [1].
|
| 38 |
+
|
| 39 |
+
# 1.1. Related Work
|
| 40 |
+
|
| 41 |
+
We review developments in 3D human body modeling, variants of implicit function networks, and applications of implicit function networks for 3D human reconstruction.
|
| 42 |
+
|
| 43 |
+
Human Body Models. Parametric human body models based on geometric primitives have been proposed early on [48] and successfully applied e.g. for human reconstruction from video data [36, 46, 45]. SCAPE [35] was one of the first realistic large scale data-driven human body models. Later variants inspired by blend skinning [17] modeled correlations between body shape and pose [15], as well as soft-tissue dynamics [37]. SMPL variants [27, 23, 33, 31] are also popular parametric body models, with linear shape spaces, compatible with standard graphics pipelines and offering good full-body representation functionality.
|
| 44 |
+
|
| 45 |
+
GHUM is a recent parametric model [49] that represents the full body model using deep non-linear models – VAEs for shape and normalizing flows for pose, respectively – with various trainable parameters, learned end-to-end. In this work, we rely on GHUM to build our novel implicit model. Specifically, besides the static and dynamic 3D human scans in our dataset, we also rely on GHUM (1) to represent the latent pose and shape state of our implicit model, (2) to generate supervised training data in the form of latent pose and shape codes with associated 3D point clouds, sampled from the underlying, posed, GHUM mesh.
|
| 46 |
+
|
| 47 |
+
Implicit Function Networks (IFNs) have been proposed recently [28, 32, 7, 29]. Instead of representing shapes as meshes, voxels, or point clouds, IFNs learn a shape space as a function of a low-dimensional global shape code and a 3D point. The function either classifies the point as inside/outside [28, 7] (occupancy networks), or returns its distance to the closest surface [32] (distance functions). The global shape is then defined by the decision boundary or the zero-level-set of this function.
|
| 48 |
+
|
| 49 |
+
Despite advantages over mesh- and voxel-based representations in tasks such as 3D shape reconstruction from partial views or incomplete data, initial work has limitations. First, while the models can reliably encode rigid axis-aligned shape prototypes, they often fail for more complex shapes. Second, the reconstructions are often overly smooth, hence they lack detail. Different approaches have been presented to address these issues. Part-based models [13, 22, 12] assemble a global shape from smaller local models. Some methods do not rely on a global shape code but on features computed by convolving over an input observation [8, 10, 34, 9]. Others address such limitations by changing the learning methodology: tailored network initialization [4] and point sampling strategies [50], or second-order losses [14, 44] have been proposed towards this end. We found the latter to be extremely useful and rely on similar losses in this work.
|
| 50 |
+
|
| 51 |
+
IFNs for Human Reconstruction. Recently implicit functions have been explored to reconstruct humans. Huang et al. [18] learn an occupancy network that conditions on image features in a multi-view camera setup. Saito et al. [41] use features from a single image and an estimated normal image [42] together with depth values along camera rays as conditioning variables. ARCH [19] combines implicit function reconstruction and explicit mesh-based human models to represent dressed people. Karunratanakul et al. [25] propose to use SDFs to learn human grasps and augment their SDFs output with sparse regional labels. Similarly to us, Deng et al. [11] represent a pose-able human subject as a number of binary occupancy functions modeled in a kinematic structure. In contrast to our work, this framework is restricted to a single person and the body is only coarsely approximated, lacking facial features and hand detail. Also
|
| 52 |
+
|
| 53 |
+

|
| 54 |
+
Figure 2. Overview of imGHUM. We compute the signed distance $s = S(\mathbf{p}, \boldsymbol{\alpha})$ and the semantics $\mathbf{c} = \mathbf{C}(\mathbf{p}, \boldsymbol{\alpha})$ of a spatial point $\mathbf{p}$ to the surface of an articulated human shape defined by the generative latent code $\boldsymbol{\alpha}$ . Using an explicit skeleton, we transform the point $\mathbf{p}$ into the normalized coordinate frames as $\{\tilde{\mathbf{p}}^j\}$ for $N = 4$ sub-part networks, modeling body, hands, and head. Each sub-model $\{S^j\}$ represents a semantic signed-distance function. The sub-models are finally combined consistently using an MLP $U$ to compute the outputs $s$ and $\mathbf{c}$ for the full body. Our multi-part pipeline builds a full body model as well as sub-part models for head and hands, jointly, in a consistent training loop. On the right, we visualize the zero-level-set body surface extracted with marching cubes and the implicit correspondences to a canonical instance given by the output semantics. The semantics allows e.g. for surface coloring or texturing.
|
| 55 |
+
|
| 56 |
+
related, SCANimate [43] builds personalized avatars from multiple scans of a single person. Concurrent to our work, LEAP [30] learns an occupancy model of human shape and pose also without hand poses, expressions, or semantics. In this work we aim for a full implicit body model, featuring a large range of body shapes corresponding to diverse humans and poses, with detailed hands, and facial expressions.
|
| 57 |
+
|
| 58 |
+
# 2. Methodology
|
| 59 |
+
|
| 60 |
+
In this section, we describe our models and the losses used for training. We introduce two variants: a single-part model that encodes the whole human in a single network and a multi-part model. The latter constructs the full body from the output superposition of four body part networks.
|
| 61 |
+
|
| 62 |
+
Background. We rely on neural networks and implicit functions to generate 3D human shapes and articulated poses. Given a latent representation $\alpha$ of the human shape and pose, together with an underlying probability distribution, we model the posed body as the zero iso-surface decision boundaries of Signed Distance Functions (SDFs) given by deep feed-forward neural networks. A signed distance $S(\mathbf{p},\boldsymbol {\alpha})\in \mathbf{R}$ is a continuous function which, given an arbitrary spatial point $\mathbf{p}\in \mathbf{R}^3$ , outputs the shortest distance to the surface defined by $\alpha$ , where the sign indicates the inside (negative) or outside (positive) side w.r.t. the surface. The posed human body surface is implicitly given by $S(\cdot ,\alpha) = 0$ .
|
| 63 |
+
|
| 64 |
+
GHUM [49] represents the human model as an articulated mesh $\mathbf{X}(\alpha)$ . GHUM has a minimally-parameterized skeleton with $J = 63$ joints (124 Euler angle DOFs), and skinning deformations, explicitly sensitive to the pose kinematics $\theta \in \mathbf{R}^{124}$ . A kinematic prior based on normalizing flows defines the distribution of valid poses [52]. Each kinematic pose $\theta$ represents a set of joint transformations $\mathbf{T}(\theta ,\mathbf{j})\in \mathbf{R}^{J\times 3\times 4}$ from the neutral to a posed state, where $\mathbf{j}\in \mathbf{R}^{J\times 3}$ are the joint centers that are dependent on the neutral body shape. The statistical body shapes are modeled
|
| 65 |
+
|
| 66 |
+
using a nonlinear embedding $\beta_{b}\in \mathbf{R}^{16}$ . In addition to skeleton articulation, a nonlinear latent code $\beta_{f}\in \mathbf{R}^{20}$ drives facial expressions. The implicit model we design here shares the same probabilistic latent representation as GHUM, $\alpha = (\beta_{b},\beta_{f},\theta)$ , but in contrast to computing an articulated mesh, we estimate a signed distance value $s = S(\mathbf{p},\boldsymbol {\alpha})$ for each arbitrary spatial point $\mathbf{p}$ .
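
To make the shared parameterization concrete, the sketch below bundles the latent code $\alpha = (\beta_b, \beta_f, \theta)$ with the dimensionalities stated above; the class name and layout are illustrative only and not part of the released model.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GhumCode:
    """Illustrative container for the latent code alpha shared by GHUM and imGHUM."""
    beta_body: np.ndarray   # body shape embedding, 16-D
    beta_face: np.ndarray   # facial expression code, 20-D
    theta: np.ndarray       # kinematic pose (Euler angle DOFs), 124-D

    def concat(self) -> np.ndarray:
        # imGHUM conditions its networks on the concatenated code.
        return np.concatenate([self.beta_body, self.beta_face, self.theta])

alpha = GhumCode(np.zeros(16), np.zeros(20), np.zeros(124))
assert alpha.concat().shape == (160,)
```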
|
| 67 |
+
|
| 68 |
+
# 2.1. Models and Training
|
| 69 |
+
|
| 70 |
+
Given a collection of full-body human meshes $\mathbf{Y}$ , together with the corresponding GHUM encodings $\pmb{\alpha} = (\beta_{b},\beta_{f},\pmb{\theta})$ , our goal is to learn a MLP-based SDF representation $S(\mathbf{p},\pmb{\alpha})$ so that it approximates the shortest signed distance to $\mathbf{Y}$ for any query point $\mathbf{p}$ . Note that $\mathbf{Y}$ could be arbitrary meshes, such as raw human scans, mesh registrations, or samples drawn from the GHUM latent space. The zero iso-surface $S(\cdot ,\alpha) = 0$ is sought to preserve all geometric detail in $\mathbf{Y}$ , including body shapes and poses, hand articulation, and facial expressions.
|
| 71 |
+
|
| 72 |
+
Single-part Network. We formulate one global neural network that decodes $S(\mathbf{p},\alpha)$ for a given latent code $\alpha$ and a spatial point $\mathbf{p}$ . Instead of pre-computing the continuous SDFs from point samples as in DeepSDF [32], we train an MLP network $S(\mathbf{p},\alpha;\omega)$ with weights $\omega$ , similar in spirit to IGR [14], to output a solution to the Eikonal equation
|
| 73 |
+
|
| 74 |
+
$$
|
| 75 |
+
\left\| \nabla_ {\mathbf {p}} S (\mathbf {p}, \boldsymbol {\alpha}; \omega) \right\| = 1, \tag {1}
|
| 76 |
+
$$
|
| 77 |
+
|
| 78 |
+
where $S$ is a signed distance function that vanishes at the surface $\mathbf{Y}$ with gradients equal to surface normals. Mathematically, we formulate our total loss as a weighted combination of
|
| 79 |
+
|
| 80 |
+
$$
|
| 81 |
+
L _ {o} (\omega) = \frac {1}{| O |} \sum_ {i \in O} (| S (\mathbf {p} _ {i}, \boldsymbol {\alpha}) | + \| \nabla_ {\mathbf {p} _ {i}} S (\mathbf {p} _ {i}, \boldsymbol {\alpha}) - \mathbf {n} _ {i} \|) \tag {2}
|
| 82 |
+
$$
|
| 83 |
+
|
| 84 |
+
$$
|
| 85 |
+
L _ {e} (\omega) = \frac {1}{| F |} \sum_ {i \in F} \left(\left\| \nabla_ {\mathbf {p} _ {i}} S \left(\mathbf {p} _ {i}, \boldsymbol {\alpha}\right) \right\| - 1\right) ^ {2} \tag {3}
|
| 86 |
+
$$
|
| 87 |
+
|
| 88 |
+
$$
|
| 89 |
+
L _ {l} (\omega) = \frac {1}{| F |} \sum_ {i \in F} \operatorname {BCE} \left(l _ {i}, \phi \left(k S \left(\mathbf {p} _ {i}, \boldsymbol {\alpha}\right)\right)\right), \tag {4}
|
| 90 |
+
$$
|
| 91 |
+
|
| 92 |
+
where $\phi$ is the sigmoid function, $O$ denotes surface samples from $\mathbf{Y}$ with normals $\mathbf{n}$ , and $F$ denotes off-surface samples with inside/outside labels $l$ , consisting of both uniformly sampled points within a bounding box and sampled points near the surface. The first term $L_{o}$ encourages the surface samples to be on the zero-level-set and the SDF gradient to be equal to the given surface normals $\mathbf{n}_i$ . The Eikonal loss $L_{e}$ is derived from (1), which requires the SDF to be differentiable everywhere with gradient norm 1. We obtain the SDF gradient $\nabla_{\mathbf{p}_i}S(\mathbf{p}_i,\boldsymbol {\alpha})$ analytically via network back-propagation. In practice, we also find it useful to include a binary cross-entropy (BCE) loss $L_{l}$ for off-surface samples, where $k$ controls the sharpness of the decision boundary. We use $k = 10$ in our experiments. Our training losses only require surface samples with normals and inside/outside labels for off-surface samples. Those are much easier and faster to obtain than pre-computing ground truth SDF values.
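
The following PyTorch-style sketch illustrates how the three loss terms of eqs. (2)-(4) could be assembled, assuming a differentiable network `sdf(points, alpha)` that returns one signed distance per point; the loss weights, batching, and the label convention (here 1 = outside) are assumptions rather than the released training code.

```python
import torch
import torch.nn.functional as F

def sdf_losses(sdf, alpha, surf_pts, surf_normals, off_pts, off_labels, k=10.0):
    """Sketch of the training losses; all tensors are (N, 3) points, (N,) distances/labels."""
    surf_pts = surf_pts.requires_grad_(True)
    off_pts = off_pts.requires_grad_(True)

    s_surf = sdf(surf_pts, alpha)
    grad_surf = torch.autograd.grad(s_surf.sum(), surf_pts, create_graph=True)[0]
    # L_o: surface samples lie on the zero-level-set, gradients match the given normals.
    loss_o = s_surf.abs().mean() + (grad_surf - surf_normals).norm(dim=-1).mean()

    s_off = sdf(off_pts, alpha)
    grad_off = torch.autograd.grad(s_off.sum(), off_pts, create_graph=True)[0]
    # L_e: Eikonal term, unit-norm gradients for off-surface samples.
    loss_e = ((grad_off.norm(dim=-1) - 1.0) ** 2).mean()
    # L_l: inside/outside classification; labels are 1 for outside, 0 for inside.
    loss_l = F.binary_cross_entropy(torch.sigmoid(k * s_off), off_labels)

    return loss_o, loss_e, loss_l
```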
|
| 93 |
+
|
| 94 |
+
Recent work suggests that standard coordinate-based MLP networks encounter difficulties in learning high-frequency functions, a phenomenon referred to as spectral bias [39, 47]. To address this limitation, inspired by [47], we encode our samples using the basic Fourier mapping $\mathbf{e}_i = [\sin (2\pi \tilde{\mathbf{p}}_i),\cos (2\pi \tilde{\mathbf{p}}_i)]^\top$ , where we first un-pose the samples with the root rigid transformation $\mathbf{T}_0^{-1}$ and normalize them into $[0,1]^3$ with a shared bounding box $\mathbf{B} = [\mathbf{b}_{min},\mathbf{b}_{max}]$ , as
|
| 95 |
+
|
| 96 |
+
$$
|
| 97 |
+
\tilde {\mathbf {p}} _ {i} = \frac {\mathbf {T} _ {0} ^ {- 1} (\boldsymbol {\theta} , \mathbf {j}) [ \mathbf {p} _ {i} , 1 ] ^ {\top} - \mathbf {b} _ {\min }}{\mathbf {b} _ {\max } - \mathbf {b} _ {\min }}. \tag {5}
|
| 98 |
+
$$
|
| 99 |
+
|
| 100 |
+
Note that our SDF is defined w.r.t. the original meshes $\mathbf{Y}$ and therefore we do not transform and scale the sample normals. Also, the loss gradients are derived w.r.t. $\mathbf{p}_i$ .
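
A minimal sketch of the point encoding of eq. (5) followed by the basic Fourier mapping is given below; the 3x4 root transform, the bounding box values, and the NumPy implementation are illustrative assumptions.

```python
import numpy as np

def encode_points(points, T0_inv, b_min, b_max):
    """Un-pose with the inverse root rigid transform (3x4), normalize into the unit
    cube (eq. 5), then apply the basic Fourier mapping."""
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=-1)   # (N, 4)
    unposed = homo @ T0_inv.T                                             # (N, 3)
    p_tilde = (unposed - b_min) / (b_max - b_min)                         # in [0, 1]^3
    return np.concatenate([np.sin(2 * np.pi * p_tilde),
                           np.cos(2 * np.pi * p_tilde)], axis=-1)         # (N, 6)

# Example: identity root transform and the 2.2 x 2.8 x 2.2 m box centered at the origin.
T0_inv = np.eye(3, 4)
b_min, b_max = np.array([-1.1, -1.4, -1.1]), np.array([1.1, 1.4, 1.1])
assert encode_points(np.zeros((5, 3)), T0_inv, b_min, b_max).shape == (5, 6)
```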
|
| 101 |
+
|
| 102 |
+
Multi-part Network. Our single-part network represents the global geometric features well for various human body shapes and kinematic poses. However, despite its spatial encoding, the network still has difficulties capturing facial expressions and articulated hand poses, where the SDF has local high-frequency variations. To augment geometric detail in the face and hand regions, we therefore propose a multi-part network that decomposes the human body into $N = 4$ local regions, i.e. the head, left and right hand, and the remaining body, respectively. This significantly reduces spectral frequency variations within each local region, allowing the specialized single-part networks to capture local geometric detail. A consistent full-body SDF $S(\mathbf{p},\alpha)$ is composed from the local single-part SDF network outputs $s^j = S^j (\mathbf{p},\alpha), j\in \{1,\dots ,N\}$ .
|
| 103 |
+
|
| 104 |
+
We follow the training protocol described in §2.1 for each local sub-part network with surface and off-surface samples within a bounding box $\mathbf{B}^j$ defined for each part. Note that we use the neck and wrist joints as the root transformation for the head and hands, respectively. In GHUM, the joint centers $\mathbf{j}$ are obtained as a function of
|
| 105 |
+
|
| 106 |
+
the neutral body shapes $\bar{\mathbf{X}} (\beta_{b})$ . However, $\bar{\mathbf{X}}$ is not explicitly present in our implicit representation. Therefore, we build a nonlinear joint regressor from $\beta_{b}$ to $\mathbf{j}$ , which is trained with supervision on samples drawn from GHUM's latent space.
|
| 107 |
+
|
| 108 |
+
In order to fuse the local SDFs into a consistent full-body SDF, while at the same time preserving local detail, we merge the last hidden layers of the local networks using an additional light-weight MLP $U$ . To train the combined network, a sample point $\mathbf{p}_i$ , defined for the full body, is transformed into the $N$ local coordinate frames using $\mathbf{T}_0^j$ and then passed to the single-part local networks, see fig. 2. The union SDF MLP then aggregates the shortest distance to the full body among the local distances. We apply our losses to the union full-body SDF as well, to ensure that the output for the full body satisfies the SDF property (1). Our multi-part pipeline produces sub-part models and a full-body one, trained jointly and leveraging data correlations among different body components.
|
| 109 |
+
|
| 110 |
+
Our spatial point encoding $\mathbf{e}_i$ requires all samples $\mathbf{p}$ to be inside the bounding box $\mathbf{B}$ , which otherwise might result in periodic SDFs due to sinusoidal encoding. However, a point sampled from the full body is likely to be outside of a sub-part's local bounding box $\mathbf{B}^j$ . Instead of clipping or projecting to the bounding box, we augment our encoding of sample $\mathbf{p}_i$ for sub-part networks $S^j$ as $\mathbf{e}_i^j = [\sin (2\pi \tilde{\mathbf{p}}_i^j),\cos (2\pi \tilde{\mathbf{p}}_i^j),\tanh (\pi (\tilde{\mathbf{p}}_i^j -0.5))]^\top$ , where the last value indicates the relative spatial location of the sample w.r.t. the bounding box. If a point $\mathbf{p}_i$ is outside the bounding box $\mathbf{B}^j$ , the union SDF MLP will learn to ignore $S^j (\mathbf{p}_i^j,\boldsymbol {\alpha})$ for the final union output.
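
The sketch below illustrates how the augmented per-part encoding and the union MLP $U$ could be wired together; the `PartNet` stand-in, its `features` interface, and all layer sizes are assumptions made for illustration and are much smaller than the actual sub-networks.

```python
import torch
import torch.nn as nn

def part_encoding(p_tilde):
    # Fourier features plus a tanh term signalling whether the normalized point
    # lies inside the part's bounding box, as described above.
    return torch.cat([torch.sin(2 * torch.pi * p_tilde),
                      torch.cos(2 * torch.pi * p_tilde),
                      torch.tanh(torch.pi * (p_tilde - 0.5))], dim=-1)

class PartNet(nn.Module):
    """Tiny stand-in for one single-part network; only the feature interface matters here."""
    def __init__(self, enc_dim=9, code_dim=160, feature_dim=256):
        super().__init__()
        self.feature_dim = feature_dim
        self.mlp = nn.Sequential(
            nn.Linear(enc_dim + code_dim, feature_dim), nn.SiLU(),
            nn.Linear(feature_dim, feature_dim), nn.SiLU())

    def features(self, enc, alpha):
        return self.mlp(torch.cat([enc, alpha], dim=-1))

class UnionSDF(nn.Module):
    """Fuses the last hidden features of the N=4 sub-networks with a small MLP U
    that outputs the full-body signed distance and 3-D semantics."""
    def __init__(self, part_nets, hidden=128):
        super().__init__()
        self.part_nets = nn.ModuleList(part_nets)
        feat_dim = sum(net.feature_dim for net in part_nets)
        self.union = nn.Sequential(nn.Linear(feat_dim, hidden), nn.SiLU(),
                                   nn.Linear(hidden, 1 + 3))

    def forward(self, p_parts, alpha):
        # p_parts[j]: query points expressed in part j's normalized frame, shape (B, 3).
        feats = [net.features(part_encoding(p), alpha)
                 for net, p in zip(self.part_nets, p_parts)]
        out = self.union(torch.cat(feats, dim=-1))
        return out[..., :1], out[..., 1:]   # signed distance s, semantics c

model = UnionSDF([PartNet() for _ in range(4)])
s, c = model([torch.rand(8, 3) for _ in range(4)], torch.zeros(8, 160))
```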
|
| 111 |
+
|
| 112 |
+
Implicit Semantics. In contrast to explicit models like GHUM, implicit functions do not naturally come with point correspondences between different shape instances. However, many applications, such as pose tracking, texture mapping, semantic segmentation, surface landmarks, or clothing modeling, largely benefit from such correspondences. Given an arbitrary spatial point, on or near the surface $\mathbf{Y}$ , i.e., $|S(\mathbf{p}_i,\boldsymbol {\alpha})| < \sigma$ , we are therefore interested in interpreting its semantics. We define the semantics as a 3D implicit function $\mathbf{C}(\mathbf{p},\boldsymbol {\alpha})\in \mathbf{R}^3$ . Given a query point $\mathbf{p}_i$ , it returns a correspondence point on a canonical GHUM mesh $\mathbf{X}(\boldsymbol{\alpha}_0)$ as
|
| 113 |
+
|
| 114 |
+
$$
|
| 115 |
+
\mathbf {C} \left(\mathbf {p} _ {i}, \boldsymbol {\alpha}\right) = \mathbf {w} _ {i} \mathbf {v} _ {f} \left(\boldsymbol {\alpha} _ {0}\right) = \mathbf {c} _ {i}, \quad \mathbf {p} _ {i} ^ {*} = \mathbf {w} _ {i} \mathbf {v} _ {f} (\boldsymbol {\alpha}) \tag {6}
|
| 116 |
+
$$
|
| 117 |
+
|
| 118 |
+
where $\mathbf{p}_i^*$ is the closest point of $\mathbf{p}_i$ in the GHUM mesh $\mathbf{X}(\boldsymbol{\alpha})$ with $f$ the nearest face and $\mathbf{w}$ the barycentric weights of the vertex coordinates $\mathbf{v}_f$ . In contrast to alternative semantic encodings, such as 2D texture coordinates, our semantic function $\mathbf{C}(\mathbf{p},\boldsymbol{\alpha})$ is smooth in the spatial domain without distortion and boundary discontinuities, which favors the learning process, c.f. [5].
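
One possible way to generate the semantic targets of eq. (6), using trimesh for closest-point queries; the exact label-generation pipeline of the paper may differ.

```python
import numpy as np
import trimesh

def semantic_targets(points, posed_mesh, canonical_mesh):
    """For each query point, find its closest point on the posed GHUM mesh and carry
    the barycentric coordinates over to the same face of the canonical mesh (eq. 6).
    Both meshes are assumed to share the GHUM topology."""
    closest, _, face_idx = trimesh.proximity.closest_point(posed_mesh, points)
    bary = trimesh.triangles.points_to_barycentric(posed_mesh.triangles[face_idx], closest)
    # c_i = w_i v_f(alpha_0): barycentric interpolation on the canonical mesh.
    return np.einsum('nk,nkd->nd', bary, canonical_mesh.triangles[face_idx])
```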
|
| 119 |
+
|
| 120 |
+
By definition, implicit SDFs return the shortest distance to the underlying implicit surface for a spatial point whereas implicit semantics associate the query point to its closest
|
| 121 |
+
|
| 122 |
+
surface neighbor. Hence, we consider implicit semantics as highly correlated to SDF learning. We co-train both tasks with our augmented multi-part network (§2.1) computing both $S(\mathbf{p},\alpha)$ and $\mathbf{C}(\mathbf{p},\alpha)$ . Semantics are trained fully supervised, using an $L_{1}$ loss for a collection of training sample points near and on the surface $\mathbf{Y}$ . Due to the correlation between tasks, our network is able to predict both signed distance and semantics, without expanding its capacity.
|
| 123 |
+
|
| 124 |
+
Using trained implicit semantics, we can e.g. apply textures to arbitrary iso-surfaces at level set $|z| \leq \sigma$ , reconstructed from our implicit SDF. During inference, an iso-surface mesh $S(\cdot, \alpha) = z$ can be extracted using Marching Cubes [26]. Then, for every generated vertex $\tilde{\mathbf{v}}_i$ we query its semantics $\mathbf{C}(\tilde{\mathbf{v}}_i, \alpha)$ . The queried correspondence point $\mathbf{C}(\tilde{\mathbf{v}}_i, \alpha)$ might not be exactly on the canonical surface and therefore we project it onto $\mathbf{X}(\alpha_0)$ . Now, we can interpolate the UV texture coordinates and assign them to $\tilde{\mathbf{v}}_i$ . Similarly, we can also assign segmentation labels or define on- or near-surface landmarks. In fig. 2 (right) we show an imGHUM reconstruction, both textured and with a binary 'clothing' segmentation. We use the latter throughout the paper, demonstrating that our semantics allow the transfer of segmentation labels to different iso-surface reconstructions. Please refer to §3.3 for more applications of our implicit semantics, e.g. landmarks or clothed human reconstruction.
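
A sketch of the texturing procedure described above: query the semantics at each extracted vertex, project the correspondence onto the canonical mesh, and interpolate its UV coordinates; the `semantics_fn` handle and the per-vertex `canonical_uv` array are assumed inputs.

```python
import numpy as np
import trimesh

def transfer_uv(iso_vertices, semantics_fn, canonical_mesh, canonical_uv):
    """Assign UV coordinates to marching-cubes vertices via the implicit semantics."""
    c = semantics_fn(iso_vertices)                                     # (N, 3) correspondences
    # Project the (possibly slightly off-surface) correspondences onto X(alpha_0).
    proj, _, face_idx = trimesh.proximity.closest_point(canonical_mesh, c)
    bary = trimesh.triangles.points_to_barycentric(
        canonical_mesh.triangles[face_idx], proj)
    uv_per_corner = canonical_uv[canonical_mesh.faces[face_idx]]       # (N, 3, 2)
    return np.einsum('nk,nkd->nd', bary, uv_per_corner)                # (N, 2)
```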
|
| 125 |
+
|
| 126 |
+
Architecture. For the single-part network we use a feed-forward architecture similar to DeepSDF [32] or IGR [14], with eight 512-dimensional fully-connected layers. To enable higher-order derivatives, we use the Swish nonlinear activation [40] instead of ReLU. IGR originally proposed Softplus; however, we found Swish superior (see tab. 3). The multi-part network is composed of one 8-layer 256-dimensional MLP for the body and three 4-layer 256-dimensional MLPs for hands and head. Each sub-network has a skip connection to the middle layer. The last hidden layers of the sub-networks are aggregated in a 128-dimensional fully-connected layer with Swish nonlinear activation, before the final network output. The final model features 2.49 million parameters and performs 4.99 million FLOPs per point query.
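
For orientation, a compact PyTorch sketch of such a single-part network (fully-connected layers, Swish activations, and a skip connection of the input to the middle layer) is shown below; the input dimensionality and output head are placeholders rather than the exact released architecture.

```python
import torch
import torch.nn as nn

class SinglePartSDF(nn.Module):
    """Illustrative single-part network: `depth` fully-connected layers of width `width`,
    Swish activations, and a skip connection of the input to the middle layer."""
    def __init__(self, in_dim=6 + 160, width=512, depth=8):
        super().__init__()
        self.skip_at = depth // 2
        dims = [in_dim] + [width] * depth
        self.layers = nn.ModuleList(
            nn.Linear(dims[i] + (in_dim if i == self.skip_at else 0), width)
            for i in range(depth))
        self.act = nn.SiLU()             # Swish
        self.head = nn.Linear(width, 1)  # signed distance

    def forward(self, x):
        h = x
        for i, layer in enumerate(self.layers):
            if i == self.skip_at:
                h = torch.cat([h, x], dim=-1)   # skip connection of the input
            h = self.act(layer(h))
        return self.head(h)

net = SinglePartSDF()
assert net(torch.rand(4, 166)).shape == (4, 1)
```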
|
| 127 |
+
|
| 128 |
+
Dataset. Our training data consists of a collection of full-body human meshes $\mathbf{Y}$ together with the corresponding GHUM latent code $\alpha$ , where $\mathbf{X}(\alpha)$ best approximates $\mathbf{Y}$ . For each mesh, we perform Poisson disk sampling on the surface and obtain $|O| = 32\mathrm{K}$ surface samples, together with their surface normals. In addition, within a predefined $2.2\times 2.8\times 2.2\mathrm{m}^3$ bounding box centered at the origin, we sample $|F| / 2 = 16\mathrm{K}$ points uniformly. Another 16K samples are generated by randomly displacing surface sample points with isotropic normal noise with $\sigma = 0.05\mathrm{m}$ . All off-surface samples are associated with inside/outside labels, computed by casting randomized rays and checking parity.
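
The per-mesh sampling described above could be approximated as follows; trimesh's even surface sampling and containment test stand in for Poisson disk sampling and the ray-parity check, so this is an assumption-laden sketch rather than the actual data pipeline.

```python
import numpy as np
import trimesh

def sample_training_points(mesh, n_surface=32_000, n_uniform=16_000,
                           n_near=16_000, noise_std=0.05):
    """Surface samples with normals, uniform box samples, and perturbed near-surface
    samples with outside labels (1 = outside), for one watertight training mesh."""
    # Surface samples (even sampling may return slightly fewer points than requested).
    surf_pts, face_idx = trimesh.sample.sample_surface_even(mesh, n_surface)
    surf_normals = mesh.face_normals[face_idx]

    # Off-surface samples: uniform in the 2.2 x 2.8 x 2.2 m box, plus noisy surface points.
    lo, hi = np.array([-1.1, -1.4, -1.1]), np.array([1.1, 1.4, 1.1])
    uniform = np.random.uniform(lo, hi, size=(n_uniform, 3))
    idx = np.random.choice(len(surf_pts), n_near)
    near = surf_pts[idx] + np.random.normal(scale=noise_std, size=(n_near, 3))

    off_pts = np.concatenate([uniform, near])
    inside = mesh.contains(off_pts)      # containment test on the watertight mesh
    return surf_pts, surf_normals, off_pts, (~inside).astype(np.float32)
```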
|
| 129 |
+
|
| 130 |
+
We also label semantics for on and near surface samples, which are drawn with random face indices and barycentric weights of the GHUM mesh and randomly displaced for near-surface samples. With the corresponding face and barycentric weights, semantic labels are generated using (6) in a light-weight computation with no need for projection or nearest neighbor search. Each mesh $Y$ is then decomposed into $N = 4$ parts and we generate the same number of training samples per body part (we use $\sigma = 0.02\mathrm{m}$ for surface samples near the hands).
|
| 131 |
+
|
| 132 |
+
We use two types of human meshes for our imGHUM training. We first randomly sample $75\mathrm{K}$ poses from H36M and the CMU mocap dataset, with Gaussian sampled body shapes, expressions and hand poses from the GHUM latent priors, where $\mathbf{Y}$ are the posed GHUM meshes. In addition, we collect $35\mathrm{K}$ human scans, on which we perform As-Conformal-As-Possible (ACAP) registrations [51] with the GHUM topology and fit GHUM parameters as well. Our human scans include the CAESAR dataset, full body pose scans, as well as close-up head and hand scans. Due to the noise and incompleteness in some of the raw scans we use the registrations for training. We fine-tune imGHUM – initially trained on GHUM sampling – using the registration dataset. In this way, imGHUM can capture geometric detail not well represented by GHUM (see tab. 2).
|
| 133 |
+
|
| 134 |
+
# 3. Experiments
|
| 135 |
+
|
| 136 |
+
We evaluate imGHUM qualitatively and quantitatively in multiple experiments. First, we compare imGHUM with its explicit counterpart GHUM (§3.1). Then, we perform an extensive baseline and ablation study, demonstrating the effect of imGHUM's architecture and training scheme (§3.2). We also build a model to compare to the recent single-subject occupancy model NASA. Finally, we show the performance of imGHUM on three representative applications demonstrating its usefulness and versatility (§3.3).
|
| 137 |
+
|
| 138 |
+
We report three different metrics. Bi-directional Chamfer- $L_{2}$ distance measures the accuracy and completeness of the surface (lower is better). Normal Consistency (NC) evaluates estimated surface normals (higher is better). Volumetric Intersection over Union (IoU) compares the reconstructed volume with the ground truth shape (higher is better). The latter can only be reported for watertight shapes. Please note that metrics do not always correlate with the perceived quality of the reconstructions. We therefore additionally include qualitative side-by-side comparisons.
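
The metrics can be computed along the following lines; the exact sampling density and averaging conventions behind the reported numbers are not specified here, so this sketch is only meant to convey the definitions.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_nc(pts_a, nrm_a, pts_b, nrm_b):
    """Symmetric Chamfer-L2 distance and normal consistency between two
    point sets sampled from the surfaces, with unit normals."""
    tree_a, tree_b = cKDTree(pts_a), cKDTree(pts_b)
    d_ab, idx_ab = tree_b.query(pts_a)   # nearest neighbor in B for each point in A
    d_ba, idx_ba = tree_a.query(pts_b)
    chamfer = (d_ab ** 2).mean() + (d_ba ** 2).mean()
    nc = 0.5 * (np.abs((nrm_a * nrm_b[idx_ab]).sum(-1)).mean() +
                np.abs((nrm_b * nrm_a[idx_ba]).sum(-1)).mean())
    return chamfer, nc

def voxel_iou(occ_pred, occ_gt):
    """Volumetric IoU from boolean occupancy grids evaluated on the same grid."""
    inter = np.logical_and(occ_pred, occ_gt).sum()
    union = np.logical_or(occ_pred, occ_gt).sum()
    return inter / max(union, 1)
```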
|
| 139 |
+
|
| 140 |
+
For visualization and numerical evaluation we extract meshes from imGHUM using Marching Cubes [26]. To this end, we approximate the bounding box of the surface through probing and then run Marching Cubes with a resolution of $256^{3}$ within the bounding box. Hereby, the signed distances support acceleration using Octree sampling: we use the highest grid density only near the surface and sample
|
| 141 |
+
|
| 142 |
+

|
| 143 |
+
Figure 3. Bodies generated and reconstructed using imGHUM. Left: imGHUM with Gaussian sampling of the shape, expression and pose latent space. Middle: Reconstructed motion sequence from the CMU mocap dataset [2] (fixed body shape). Right: Body shape and facial expressions latent code interpolation (fixed pose). See supplementary material for more examples.
|
| 144 |
+
|
| 145 |
+
far less frequently away from it. However, we note that for most applications, such as human reconstruction and collision detection, Marching Cubes is not needed, except once for the final mesh visualization.
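
A basic version of this extraction, without the octree acceleration, might look as follows; `sdf_fn` is any callable evaluating the learned SDF on a batch of points, and evaluating the full grid in one call (instead of in chunks) is a simplification.

```python
import numpy as np
from skimage import measure

def extract_mesh(sdf_fn, b_min, b_max, resolution=256):
    """Evaluate the SDF on a dense grid inside the estimated bounding box and run
    Marching Cubes on the zero-level-set."""
    axes = [np.linspace(b_min[i], b_max[i], resolution) for i in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing='ij'), axis=-1).reshape(-1, 3)
    sdf = sdf_fn(grid).reshape(resolution, resolution, resolution)
    spacing = (b_max - b_min) / (resolution - 1)
    verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0,
                                                      spacing=tuple(spacing))
    return verts + b_min, faces, normals
```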
|
| 146 |
+
|
| 147 |
+
# 3.1. Representational Power
|
| 148 |
+
|
| 149 |
+
In fig. 3, we show reconstructions of a motion capture sequence applied to imGHUM. Our model captures well the articulated full-body motion, with consistent body shape for various poses. By sharing the latent priors with GHUM, imGHUM supports realistic body shape and pose generation (fig. 3, left) as well as smooth interpolation within the shape and expression latent spaces (fig. 3, right). Our model generalizes well to novel body shapes, expressions, and poses, and has interpretable and decoupled latent representations.
|
| 150 |
+
|
| 151 |
+
In tab. 2, we compare the representation power of imGHUM with the explicit GHUM on our registration test set. imGHUM better captures the detail present in the data, as demonstrated numerically. An imGHUM model trained only using GHUM samples captures the body deformation due to articulation less well, indicating that GHUM is a useful surrogate to 'synthetically' bootstrap the training of the implicit network, but that real data is important as well.
|
| 152 |
+
|
| 153 |
+
Limitations of imGHUM are sometimes apparent for very extreme pose configurations that have not been covered in the training set, such as anthropometrically invalid poses that are impossible for a human, e.g. poses resulting in self-intersection or bending joints beyond their anatomical range of motion. imGHUM produces plausible results for inputs not too far from expected configurations, but the results occasionally feature defects, e.g. distorted or incomplete geometry or inaccurate semantics; see fig. 8 for examples.
|
| 154 |
+
|
| 155 |
+
# 3.2. Baseline Experiments
|
| 156 |
+
|
| 157 |
+
In the next section, we compare imGHUM to various baselines inspired by recent work. The first is an autoencoder, where the encoder side is PointNet++ [38] and the decoder is our single-part network. The idea is to let the network find the best representation instead of precomputing a low dimensional representation. In practice this means that latent codes are not interpretable. Further,
|
| 158 |
+
|
| 159 |
+
<table><tr><td>Model</td><td>IoU ↑</td><td>Chamfer ×10-3↓</td><td>NC ↑</td></tr><tr><td>imGHUM ‡</td><td>0.900</td><td>0.071</td><td>0.977</td></tr><tr><td>GHUM</td><td>0.913</td><td>0.055</td><td>0.983</td></tr><tr><td>imGHUM</td><td>0.932</td><td>0.040</td><td>0.984</td></tr></table>
|
| 160 |
+
|
| 161 |
+
Table 2. GHUM comparisons on registration dataset. imGHUM marked with $\ddagger$ is trained only based on GHUM sampling data.
|
| 162 |
+
|
| 163 |
+
<table><tr><td>Model</td><td>IoU ↑</td><td>Chamfer ×10-3↓</td><td>NC ↑</td></tr><tr><td>Autoencoder</td><td>0.831</td><td>2.457</td><td>0.923</td></tr><tr><td>Single-part †</td><td>0.957</td><td>0.085</td><td>0.983</td></tr><tr><td>Single-part ⊕</td><td>0.958</td><td>0.070</td><td>0.983</td></tr><tr><td>Single-part</td><td>0.965</td><td>0.052</td><td>0.986</td></tr><tr><td>Single-part deeper †</td><td>0.961</td><td>0.070</td><td>0.984</td></tr><tr><td>Single-part deeper</td><td>0.967</td><td>0.058</td><td>0.986</td></tr><tr><td>imGHUM †</td><td>0.955</td><td>0.095</td><td>0.984</td></tr><tr><td>imGHUM w/o Ll (4)</td><td>0.966</td><td>0.051</td><td>0.988</td></tr><tr><td>imGHUM</td><td>0.969</td><td>0.036</td><td>0.989</td></tr></table>
|
| 164 |
+
|
| 165 |
+
Table 3. Numerical comparison with baselines. Models marked with $\dagger$ don't use the Fourier input mapping. $\oplus$ marks Softplus activation as in [14].
|
| 166 |
+
|
| 167 |
+
<table><tr><td>Model</td><td>IoU ↑<br>Head / Hands</td><td>Ch. ×10-3↓<br>Head / Hands</td><td>NC ↑<br>Head / Hands</td></tr><tr><td>Single-part</td><td>0.967 / 0.818</td><td>0.010 / 0.201</td><td>0.937 / 0.790</td></tr><tr><td>Single-part deep.</td><td>0.968 / 0.832</td><td>0.011 / 0.271</td><td>0.938 / 0.811</td></tr><tr><td>imGHUM</td><td>0.976 / 0.929</td><td>0.007 / 0.031</td><td>0.944 / 0.934</td></tr></table>
|
| 171 |
+
|
| 172 |
+
Table 4. Unidirectional metrics (GT to generated mesh) for critical body parts. Our multi-part architecture significantly improves the head and hand reconstruction accuracy.
|
| 173 |
+
|
| 174 |
+
we experiment with our single-part network without the Fourier input mapping, largely following the training scheme proposed by IGR [14], as well as with the input mapping. Finally, we train a deeper single-part network variant (10 layers) having roughly the same number of trainable parameters as imGHUM.
|
| 175 |
+
|
| 176 |
+
In tab. 3 we report the metrics for different variants on our test set containing 1000 GHUM samples. In fig. 4, we show a side-by-side comparison. The Fourier input mapping consistently improves results for all variants. We have also tried higher-dimensional Fourier features but empirically found the basic encoding to work best in our setting. The auto-encoder produces large artifacts especially in the hand region. Similar problems, large blobs or missing pieces, can be observed in results from single-part variants, especially for the hands and, less severe, also for the facial region. These problems, however, are not well captured by
|
| 177 |
+
|
| 178 |
+
globally evaluating the whole shape. To this end, we evaluate imGHUM and our single-part models specifically for these critical regions, see tab. 4. Only imGHUM consistently produces high-quality results also for hands and the face, supporting the proposed architecture choices.
|
| 179 |
+
|
| 180 |
+
Next, we compare imGHUM to the recent single-subject multi-pose implicit human occupancy model NASA [11]. With a fixed body shape, we generate 22500 random GHUM full-body training poses and 2500 testing poses from Human3.6M [20] and the CMU mocap dataset [2], including head and hand poses. Using the original point sampling strategy in NASA, we have trained the network until convergence, based on the original source code. Please see the supplementary material for details on how we adapted NASA for the GHUM skeleton. For comparison, we have trained an imGHUM architecture with half as many layers as our full multi-subject model, each with half the dimensionality, using the same dataset. Even though GHUM-based NASA has $3 \times$ more parameters, our smaller-size single-subject imGHUM still performs significantly better in representing both the global shape and local detail (see hand reconstructions in fig. 5). In contrast to NASA, which computes binary occupancy, imGHUM returns more informative signed distance values, which produce smooth decision boundaries and preserve the detailed geometry much better. Further key differences to NASA are our considerably simpler architecture that requires far less computation to produce a reconstruction, our semantics, and the carefully chosen learning model (i.e. Fourier encoding, second-order losses) that pays particular attention to surface detail. Moreover, imGHUM additionally models body shape, fingers, and facial expressions using generative latent codes (tab. 1).
|
| 181 |
+
|
| 182 |
+
# 3.3. Applications
|
| 183 |
+
|
| 184 |
+
We apply imGHUM to three key tasks: body surface reconstruction, partial point cloud completion, and dressed and inclusive human reconstruction.
|
| 185 |
+
|
| 186 |
+
Triangle Set Surface Reconstruction. Given a triangle set ('soup') with $n$ vertices $\{\hat{\mathbf{v}}\} \in \mathbf{R}^{3n}$ along with oriented normals $\{\hat{\mathbf{n}}\} \in \mathbf{R}^{3n}$ , we deploy our parametric implicit SDF for surface reconstruction with semantics. This task is relevant, e.g., for the triangle soups produced by 3D scanners. To extract the surface from an incomplete scan, we apply a BFGS optimizer to fit $\alpha = (\beta_{b},\beta_{f},\theta)$ such that all vertices $\hat{\mathbf{v}}$ are close to the implicit surface $S(\cdot ,\alpha) = 0$ . Moreover, we enforce gradients at $\hat{\mathbf{v}}$ to be close to normals $\hat{\mathbf{n}}$ and generated off-surface samples to have distances with the expected signs. In addition, we sample near-surface points at a small distance $\eta$ along surface normals, and enforce $S(\hat{\mathbf{v}}\pm \eta \hat{\mathbf{n}},\alpha) = \pm \eta$ , as in [32]. Note that all these operations can be easily implemented and are fully differentiable due to imGHUM being an SDF. When 3D landmarks are available on the target surface, e.g. as triangulated from
|
| 187 |
+
|
| 188 |
+

|
| 189 |
+
Figure 4. Qualitative comparison with baseline experiments. From left to right: autoencoder, single-part model without and with Fourier input mapping, our multi-part imGHUM, ground-truth GHUM. We use our semantics network to color baseline results.
|
| 190 |
+
|
| 191 |
+

|
| 192 |
+
Figure 5. Comparison with NASA [11] on our single-subject multi-pose dataset. Top to bottom: GT, single-subject imGHUM, and NASA reconstructions. imGHUM better captures global and local geometry, despite using a significantly smaller network version in this experiment. Our results are also numerically superior: IoU $(\uparrow)$ 0.962 (ours) vs. 0.839 (theirs), Ch. $(\downarrow)$ $0.068\times 10^{-3}$ (ours) vs. $3.53\times 10^{-3}$ (theirs), NC $(\uparrow)$ 0.985 (ours) vs. 0.903 (theirs).
|
| 193 |
+
|
| 194 |
+
2D detected landmarks of raw scanner images, we additionally augment the optimization with landmark losses based on the imGHUM semantics. Please see the supplementary material for details of the losses.
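
A simplified sketch of this fitting procedure is given below; it uses Adam instead of BFGS, omits the free-space, landmark, and prior terms as well as loss weights, and assumes a trained `sdf(points, alpha)` network, so it should be read as an outline rather than the exact optimization used in the paper.

```python
import torch

def fit_alpha(sdf, vertices, normals, alpha_init, eta=0.005, steps=500, lr=1e-2):
    """Fit the latent code alpha to a triangle soup given its vertices and normals."""
    alpha = alpha_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([alpha], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        v = vertices.clone().requires_grad_(True)
        s = sdf(v, alpha)
        grad = torch.autograd.grad(s.sum(), v, create_graph=True)[0]
        loss = (s.abs().mean()                                   # vertices on the zero-level-set
                + (grad - normals).norm(dim=-1).mean()           # SDF gradients match normals
                + (sdf(vertices + eta * normals, alpha) - eta).abs().mean()
                + (sdf(vertices - eta * normals, alpha) + eta).abs().mean())
        loss.backward()
        opt.step()
    return alpha.detach()
```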
|
| 195 |
+
|
| 196 |
+
For reference, we also show results on IF-Net [8], a recent method for implicit surface extraction, completion, and voxel super-resolution. We trained IF-Net with the same pose and shape variation as used for imGHUM – presumably much more variation than the 2183 scans in the original paper. In both training and testing we generate 15K random samples from the observed shape and pass them through IF-Net for surface reconstruction. Note that IF-Net uses less information than our method, but also solves an easier task as it is not computing a global and semantically meaningful shape code. An entirely fair comparison is thus not possible. However, we believe that by comparing with IF-Net, we show that imGHUM is adequate for this task. Fig. 6 qualitatively shows examples of both imGHUM fits and IF-Net inference results for 150 human scans containing 20 subjects. Our model not only fits well to the volume of the scans but also reconstructs the facial expressions and hand poses. Using landmarks and ICP losses, one could also fit GHUM to the triangle sets. However, our fully differentiable imGHUM losses show superior performance over ICP-based GHUM fitting (Chamfer $(\downarrow)$ $0.77\times 10^{-3}$ , NC $(\uparrow)$ 0.921).
|
| 197 |
+
|
| 198 |
+
Partial Point Cloud Completion. Another relevant task for many applications is shape completion. Here we show
|
| 199 |
+
|
| 200 |
+

|
| 201 |
+
Figure 6. Left: Triangle set surface reconstruction (input scan, imGHUM fit, and IF-Net inference from left to right). Numerically, imGHUM fits are better than IF-Net with Chamfer distance $(\downarrow)0.156\times 10^{-3}$ (ours) vs. $0.844\times 10^{-3}$ (IF-Net), and NC $(\uparrow)$ 0.954 (ours) vs. 0.914 (IF-Net). Right: Partial point cloud completion (input point cloud, imGHUM fit, IF-Net, and ground truth scan).
|
| 202 |
+
|
| 203 |
+
surface reconstruction and completion from partial point clouds as recorded e.g. using a depth sensor. We synthesize depth maps from A-posed scans of 10 subjects from the Faust dataset [6] using the intrinsics and the resolution of a Kinect V2 sensor. To complete the partial view, we search for the $\alpha$ such that all points from the depth point cloud are close to imGHUM's zero-level-set. We sample additional points along surface normals (estimated from depth image gradients) and enforce estimated distances by imGHUM to be close to true distances. We also sample points in front of the depth cloud and around it and enforce their $L_{l}$ label loss. Finally, we also supervise the estimated normals. We do not rely on landmarks or other semantics in this experiment.
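
For illustration, free-space samples with 'outside' labels could be generated from the depth point cloud as sketched below (points scaled toward the camera origin lie on observed viewing rays in front of the surface); this is an assumed construction, not the paper's exact sampling.

```python
import numpy as np

def free_space_samples(points_cam, n_samples=5_000):
    """Points on the viewing ray between the camera and an observed depth point lie in
    free space, so they can be labeled 'outside' (1) for the L_l term. `points_cam`
    is the back-projected depth point cloud in camera coordinates (camera at origin)."""
    idx = np.random.choice(len(points_cam), n_samples)
    t = np.random.uniform(0.2, 0.9, size=(n_samples, 1))   # stay clearly in front of the surface
    return points_cam[idx] * t, np.ones(n_samples, dtype=np.float32)
```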
|
| 204 |
+
|
| 205 |
+
We show IF-Net [8] results for comparison. We trained IF-Net specifically for this task while we use the same imGHUM for all experiments. Our reconstructions are numerically better with Chamfer distance $(\downarrow)$ $0.103\times 10^{-3}$ (ours) vs. $0.315\times 10^{-3}$ (theirs) and NC $(\uparrow)$ 0.962 (ours) vs. 0.936 (theirs). Qualitatively, our results contain much more of the desirable reconstruction detail, especially for hands and faces, see fig. 6, right. Note, again, that IF-Net only reconstructs a surface while we recover the parametrization of a body model, a considerably harder task.
|
| 206 |
+
|
| 207 |
+
Dressed and Inclusive Human Modeling. imGHUM is template-free, which is a valuable property for future developments. While this work deals primarily with the methodology of learning a generative implicit human model – in itself a complex and novel task – we also give an outlook on possible future directions. Building a detailed model of the human body shape including hair and clothing, or learning inclusive models could be such directions. However, currently the data needed for building such models does not exist at a large enough scale. To demonstrate that imGHUM is a valuable building block for such models, we leverage it as an inner layer for personalized human models. Concretely, we augment imGHUM with a light-weight residual SDF network, conditioned on the output of imGHUM, i.e. both the signed distances and the semantics. We estimate the residual model using the same learning scheme as for imGHUM, but limit training to a single scan. The final output models the human with layers, including the inner body shape represented with imGHUM and the personalization (hair, clothing, non-standard body topology) as residuals, cf. fig. 7. This layered representation can be reposed by changing the parameterization of the underlying imGHUM. Hereby, the
|
| 208 |
+
|
| 209 |
+

|
| 210 |
+
Figure 7. From left to right: scan, GHUM template mesh ACAP registration, imGHUM+residual fit (color-scale represents semantics), reposed imGHUM+residual, imGHUM+residual fits to people with limb differences. In contrast to the fitted template mesh, imGHUM+residual successfully models topologies different from the plain human body and captures more geometric detail.
|
| 211 |
+
|
| 212 |
+

|
| 213 |
+
Figure 8. Failure modes. Interpenetration can lead to unwanted shapes and semantics (leaked hand semantics to the cheek). Extreme poses may produce deformed body parts (thin arms).
|
| 214 |
+
|
| 215 |
+
residual model acts as a fitted layer around imGHUM and deforms according to the distance and semantic field defined by imGHUM. Please see the supplementary material for more examples, a numerical evaluation, and implementation details.
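
A minimal sketch of such a residual layer is shown below; the network size and the exact conditioning are assumptions, and in particular the residual here is a plain additive offset on the signed distance.

```python
import torch
import torch.nn as nn

class ResidualSDF(nn.Module):
    """Small residual network on top of imGHUM: it receives the query point together
    with imGHUM's signed distance s and semantics c at that point and predicts a
    distance offset, so the personalized surface stays anchored to the inner body."""
    def __init__(self, width=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1 + 3, width), nn.SiLU(),
            nn.Linear(width, width), nn.SiLU(),
            nn.Linear(width, 1))

    def forward(self, p, s, c):
        return s + self.net(torch.cat([p, s, c], dim=-1))

res = ResidualSDF()
assert res(torch.rand(4, 3), torch.rand(4, 1), torch.rand(4, 3)).shape == (4, 1)
```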
|
| 216 |
+
|
| 217 |
+
# 4. Discussion and Conclusion
|
| 218 |
+
|
| 219 |
+
We introduced imGHUM, the first 3D human body model with controllable pose and shape represented as an implicit signed distance function. imGHUM has comparable representation power to state-of-the-art mesh-based models and can represent significant variations in body pose, shape, and facial expressions, as well as precise underlying semantics. imGHUM has additional valuable properties, since its underlying implicit SDF represents not only the surface of the body but also its neighborhood, which e.g. enables collision tests with other objects or efficient distance losses. imGHUM can be used to build diverse, fair models of humans who may not match a standard template. This paves the way for transformative research and inclusive applications like modeling clothing, enabling immersive virtual apparel try-on, or free-viewpoint photorealistic visualization. Our models are available for research [1].
|
| 220 |
+
|
| 221 |
+
# References
|
| 222 |
+
|
| 223 |
+
[1] https://github.com/google-research/google-research/tree/master/imghum. 2, 8
|
| 224 |
+
[2] CMU graphics lab motion capture database. 2009. http://mocap.cs.cmu.edu/. 6, 7
|
| 225 |
+
[3] Thiemo Alldieck, Marcus Magnor, Bharat Lal Bhatnagar, Christian Theobalt, and Gerard Pons-Moll. Learning to reconstruct people in clothing from a single RGB camera. In IEEE Conf. Comput. Vis. Pattern Recog., pages 1175-1186. IEEE, 2019. 1
|
| 226 |
+
[4] Matan Atzmon and Yaron Lipman. Sal: Sign agnostic learning of shapes from raw data. In IEEE Conf. Comput. Vis. Pattern Recog., pages 2565-2574, 2020. 2
|
| 227 |
+
[5] Bharat Lal Bhatnagar, Cristian Sminchisescu, Christian Theobalt, and Gerard Pons-Moll. Loopreg: Self-supervised learning of implicit surface correspondences, pose and shape for 3d human mesh registration. In Adv. Neural Inform. Process. Syst., 2020. 4
|
| 228 |
+
[6] Federica Bogo, Javier Romero, Matthew Loper, and Michael J. Black. FAUST: Dataset and evaluation for 3D mesh registration. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 2014. 8
|
| 229 |
+
[7] Zhiqin Chen and Hao Zhang. Learning implicit fields for generative shape modeling. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5939-5948, 2019. 1, 2
|
| 230 |
+
[8] Julian Chibane, Thiemo Alldieck, and Gerard Pons-Moll. Implicit functions in feature space for 3d shape reconstruction and completion. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE, jun 2020. 1, 2, 7, 8
|
| 231 |
+
[9] Julian Chibane, Aymen Mir, and Gerard Pons-Moll. Neural unsigned distance fields for implicit function learning. In Adv. Neural Inform. Process. Syst., December 2020. 2
|
| 232 |
+
[10] Julian Chibane and Gerard Pons-Moll. Implicit feature networks for texture completion from partial 3d data. In Eur. Conf. Comput. Vis. Worksh. Springer, August 2020. 2
|
| 233 |
+
[11] Boyang Deng, JP Lewis, Timothy Jeruzalski, Gerard Pons-Moll, Geoffrey Hinton, Mohammad Norouzi, and Andrea Tagliasacchi. Neural articulated shape approximation. In Eur. Conf. Comput. Vis. Springer, August 2020. 2, 7
|
| 234 |
+
[12] Kyle Genova, Forrester Cole, Avneesh Sud, Aaron Sarna, and Thomas Funkhouser. Local deep implicit functions for 3d shape. In IEEE Conf. Comput. Vis. Pattern Recog., pages 4857-4866, 2020. 2
|
| 235 |
+
[13] Kyle Genova, Forrester Cole, Daniel Vlasic, Aaron Sarna, William T Freeman, and Thomas Funkhouser. Learning shape templates with structured implicit functions. In Int. Conf. Comput. Vis., pages 7154-7164, 2019. 2
|
| 236 |
+
[14] Amos Gropp, Lior Yariv, Niv Haim, Matan Atzmon, and Yaron Lipman. Implicit geometric regularization for learning shapes. In Int. Conf. on Mach. Learn., pages 3569-3579. 2020. 2, 3, 5, 6
|
| 237 |
+
[15] Nils Hasler, Carsten Stoll, Martin Sunkel, Bodo Rosenhahn, and H-P Seidel. A statistical model of human pose and body shape. Comput. Graph. Forum, 28(2):337-346, 2009. 2
|
| 238 |
+
[16] N. Hesse, S. Pujades, M.J. Black, M. Arens, U. Hofmann, and S. Schroeder. Learning and tracking the 3D body shape
|
| 239 |
+
|
| 240 |
+
of freely moving infants from RGB-D sequences. IEEE Trans. Pattern Anal. Mach. Intell., 42(10):2540-2551, 2020. 1
|
| 241 |
+
[17] David A Hirshberg, Matthew Loper, Eric Rachlin, and Michael J Black. Coregistration: Simultaneous alignment and modeling of articulated 3D shape. In Eur. Conf. Comput. Vis., pages 242-255, 2012. 2
|
| 242 |
+
[18] Zeng Huang, Tianye Li, Weikai Chen, Yajie Zhao, Jun Xing, Chloe LeGendre, Linjie Luo, Chongyang Ma, and Hao Li. Deep volumetric video from very sparse multi-view performance capture. In Eur. Conf. Comput. Vis., pages 336-354, 2018. 2
|
| 243 |
+
[19] Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, and Tony Tung. Arch: Animatable reconstruction of clothed humans. In IEEE Conf. Comput. Vis. Pattern Recog., pages 3093-3102, 2020. 2
|
| 244 |
+
[20] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6M: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE Trans. Pattern Anal. Mach. Intell., 2014. 7
|
| 245 |
+
[21] Arjun Jain, Thorsten Thormählen, Hans-Peter Seidel, and Christian Theobalt. Moviereshape: Tracking and reshaping of humans in videos. ACM Trans. Graph., 29(5), 2010. 1
|
| 246 |
+
[22] Chiyu Jiang, Avneesh Sud, Ameesh Makadia, Jingwei Huang, Matthias Nießner, and Thomas Funkhouser. Local implicit grid representations for 3d scenes. In IEEE Conf. Comput. Vis. Pattern Recog., pages 6001-6010, 2020. 2
|
| 247 |
+
[23] Hanbyul Joo, Tomas Simon, and Yaser Sheikh. Total capture: A 3D deformation model for tracking faces, hands, and bodies. In IEEE Conf. Comput. Vis. Pattern Recog., pages 8320-8329. IEEE, 2018. 1, 2
|
| 248 |
+
[24] Angjoo Kanazawa, Michael J. Black, David W. Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In IEEE Conf. Comput. Vis. Pattern Recog., 2018. 1
|
| 249 |
+
[25] Korrawe Karunratanakul, Jinlong Yang, Yan Zhang, Michael J Black, Krikamol Muandet, and Siyu Tang. Grasping field: Learning implicit representations for human grasps. In Int. Conf. on 3D Vis., pages 333-344. IEEE, 2020. 2
|
| 250 |
+
[26] Thomas Lewiner, Hélio Lopes, Antonio Wilson Vieira, and Geovan Tavares. Efficient implementation of marching cubes' cases with topological guarantees. Journal of Graphics Tools, 8(2):1-15, 2003. 5
|
| 251 |
+
[27] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. SMPL: A skinned multiperson linear model. ACM Trans. Graph., 34(6):248:1-248:16, 2015. 1, 2
|
| 252 |
+
[28] Lars M. Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In IEEE Conf. Comput. Vis. Pattern Recog., pages 4460-4470, 2019. 1, 2
|
| 253 |
+
[29] Mateusz Michalkiewicz, Jhony K Pontes, Dominic Jack, Mahsa Baktashmotlagh, and Anders Eriksson. Deep level sets: Implicit surface representations for 3D shape inference. arXiv preprint arXiv:1901.06802, 2019. 1, 2
|
| 254 |
+
|
| 255 |
+
[30] Marko Mihajlovic, Yan Zhang, Michael J Black, and Siyu Tang. LEAP: Learning articulated occupancy of people. In IEEE Conf. Comput. Vis. Pattern Recog., 2021. 3
|
| 256 |
+
[31] Ahmed A A Osman, Timo Bolkart, and Michael J. Black. STAR: A sparse trained articulated human body regressor. In Eur. Conf. Comput. Vis., 2020. 2
|
| 257 |
+
[32] Jeong Joon Park, Peter Florence, Julian Straub, Richard A. Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In IEEE Conf. Comput. Vis. Pattern Recog., pages 165-174, 2019. 1, 2, 3, 5, 7
|
| 258 |
+
[33] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed A. A. Osman, Dimitrios Tzionas, and Michael J. Black. Expressive body capture: 3D hands, face, and body from a single image. In IEEE Conf. Comput. Vis. Pattern Recog. IEEE, 2019. 2
|
| 259 |
+
[34] Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, and Andreas Geiger. Convolutional occupancy networks. In Eur. Conf. Comput. Vis., Cham, 2020. Springer International Publishing. 2
|
| 260 |
+
[35] Leonid Pishchulin, Stefanie Wuhrer, Thomas Helten, Christian Theobalt, and Bernt Schiele. Building statistical shape spaces for 3d human modeling. Pattern Recognition, 67:276-286, 2017. 2
|
| 261 |
+
[36] Ralf Plankers and Pascal Fua. Articulated soft objects for video-based body modeling. In Int. Conf. Comput. Vis., pages 394-401. IEEE, 2001. 2
|
| 262 |
+
[37] Gerard Pons-Moll, Javier Romero, Naureen Mahmood, and Michael J Black. Dyna: a model of dynamic human shape in motion. ACM Trans. Graph., 34:120, 2015. 2
|
| 263 |
+
[38] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Adv. Neural Inform. Process. Syst., pages 5099-5108, 2017. 6
|
| 264 |
+
[39] Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. In Int. Conf. on Mach. Learn., pages 5301-5310. PMLR, 2019. 4
|
| 265 |
+
[40] Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions. arXiv preprint arXiv:1710.05941, 2017. 5
|
| 266 |
+
[41] Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, and Hao Li. Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. In Int. Conf. Comput. Vis., pages 2304-2314, 2019. 2
|
| 267 |
+
[42] Shunsuke Saito, Tomas Simon, Jason Saragih, and Hanbyul Joo. Pifuhd: Multi-level pixel-aligned implicit function for high-resolution 3d human digitization. In IEEE Conf. Comput. Vis. Pattern Recog., 2020. 2
|
| 268 |
+
[43] Shunsuke Saito, Jinlong Yang, Qianli Ma, and Michael J. Black. SCANimate: Weakly supervised learning of skinned clothed avatar networks. In IEEE Conf. Comput. Vis. Pattern Recog., 2021. 3
|
| 269 |
+
[44] Vincent Sitzmann, Julien N.P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. In Adv. Neural Inform. Process. Syst., 2020. 2
|
| 270 |
+
|
| 271 |
+
[45] Cristian Sminchisescu, Atul Kanaujia, and Dimitris Metaxas. Learning joint top-down and bottom-up processes for 3D visual inference. In IEEE Conf. Comput. Vis. Pattern Recog., pages 1743-1752. IEEE, 2006. 2
|
| 272 |
+
[46] Cristian Sminchisescu and Alexandru Telea. Human pose estimation from silhouettes. a consistent approach using distance level sets. In 10th Int. Conf. on Computer Graphics, Visualization and Computer Vision, 2002. 1, 2
|
| 273 |
+
[47] Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. Adv. Neural Inform. Process. Syst., 2020. 4
|
| 274 |
+
[48] Daniel Thalmann, Jianhua Shen, and Eric Chauvineau. Fast realistic human body deformations for animation and VR applications. In Proceedings of CG International'96, pages 166-174, 1996. 2
|
| 275 |
+
[49] Hongyi Xu, Eduard Gabriel Bazavan, Andrei Zanfir, William T Freeman, Rahul Sukthankar, and Cristian Sminchisescu. GHUM & GHUML: Generative 3d human shape and articulated pose models. In IEEE Conf. Comput. Vis. Pattern Recog., pages 6184-6193, 2020. 1, 2, 3
|
| 276 |
+
[50] Yifan Xu, Tianqi Fan, Yi Yuan, and Gurprit Singh. Ladybird: Quasi-monte carlo sampling for deep implicit field based 3d reconstruction with symmetry. In Eur. Conf. Comput. Vis., 2020. 2
|
| 277 |
+
[51] Yusuke Yoshiyasu, Wan-Chun Ma, Eiichi Yoshida, and Fumio Kanehiro. As-conformal-as-possible surface registration. Comput. Graph. Forum, 2014. 5
|
| 278 |
+
[52] Andrei Zanfir, Eduard Gabriel Bazavan, Hongyi Xu, William T Freeman, Rahul Sukthankar, and Cristian Sminchisescu. Weakly supervised 3d human pose and shape reconstruction with normalizing flows. In Eur. Conf. Comput. Vis., 2020. 1, 3
|
| 279 |
+
[53] Mihai Zanfir, Elisabeta Oneata, Alin-Ionut Popa, Andrei Zanfir, and Cristian Sminchisescu. Human synthesis and scene compositing. In AAAI, pages 12749-12756, 2020. 1
|
| 280 |
+
[54] Tiancheng Zhi, Christoph Lassner, Tony Tung, Carsten Stoll, Srinivasa G. Narasimhan, and Minh Vo. TexMesh: Reconstructing detailed human texture and geometry from RGB-D video. In Eur. Conf. Comput. Vis. Springer International Publishing, 2020. 1
|
imghumimplicitgenerativemodelsof3dhumanshapeandarticulatedpose/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:309d97073b33e3d6e990de59d9d4cfa2f7209650f81c34a5b959fad8a9793ce2
|
| 3 |
+
size 295375
|
imghumimplicitgenerativemodelsof3dhumanshapeandarticulatedpose/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ff327b97feb76e17a080894da5713292cd0bde7f4ddd21a35c4fcda7e0d2fda4
|
| 3 |
+
size 424368
|
inasintegralnasfordeviceawaresalientobjectdetection/b7e80f1c-cf18-43db-a58d-e3d942282fd8_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a44f5f3e4c7148c0f270ea6d05ab60cbbddc306b7eb308823d4d2a1770b4c144
|
| 3 |
+
size 82446
|
inasintegralnasfordeviceawaresalientobjectdetection/b7e80f1c-cf18-43db-a58d-e3d942282fd8_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2691614d23c3f31d416d237abb9b7ebf91043ebcbb114ed145b6e72f4e653911
|
| 3 |
+
size 112522
|
inasintegralnasfordeviceawaresalientobjectdetection/b7e80f1c-cf18-43db-a58d-e3d942282fd8_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d49b813f5c202634e7f0def1461fd826f3a6d083dcd3b2b322f70cf8e91aa3d6
|
| 3 |
+
size 1977215
|
inasintegralnasfordeviceawaresalientobjectdetection/full.md
ADDED
|
@@ -0,0 +1,336 @@
| 1 |
+
# iNAS: Integral NAS for Device-Aware Salient Object Detection
|
| 2 |
+
|
| 3 |
+
Yu-Chao Gu $^{1}$ Shang-Hua Gao $^{1}$ Xu-Sheng Cao $^{1}$ Peng Du $^{2}$ Shao-Ping Lu $^{1}$ Ming-Ming Cheng $^{1*}$ $^{1}$ TKLNDST, CS, Nankai University $^{2}$ Huawei Technologies https://mmcheng.net/inas/
|
| 4 |
+
|
| 5 |
+
# Abstract
|
| 6 |
+
|
| 7 |
+
Existing salient object detection (SOD) models usually focus on either backbone feature extractors or saliency heads, ignoring their relations. A powerful backbone could still achieve sub-optimal performance with a weak saliency head and vice versa. Moreover, the balance between model performance and inference latency poses a great challenge to model design, especially when considering different deployment scenarios. Considering all components in an integral neural architecture search (iNAS) space, we propose a flexible device-aware search scheme that only trains the SOD model once and quickly finds high-performance but low-latency models on multiple devices. An evolution search with latency-group sampling (LGS) is proposed to explore the entire latency area of our enlarged search space. Models searched by iNAS achieve similar performance with SOTA methods but reduce the $3.8 \times$ , $3.3 \times$ , $2.6 \times$ , $1.9 \times$ latency on Huawei Nova6 SE, Intel Core CPU, the Jetson Nano, and Nvidia Titan Xp. The code is released at https://mmcheng.net/inas/.
|
| 8 |
+
|
| 9 |
+
# 1. Introduction
|
| 10 |
+
|
| 11 |
+
Salient object detection (SOD) aims to segment the most attractive objects in images [1, 59]. Served as a preprocessing step, SOD is required by many downstream applications, i.e., image editing [8], image retrieval [22], visual tracking [24], and video object segmentation [20]. These applications often require the SOD model to be deployed with low inference latency on multiple devices, i.e., GPUs, CPUs, mobile phones, and embedded devices. Each device has unique properties. For instance, GPUs are good at massively parallel computing [43] while the embedded devices are energy-friendly at the cost of a low computing budget [27]. Thus, different deployment scenarios require quite different designs of SOD models.
|
| 12 |
+
|
| 13 |
+
State-of-the-art (SOTA) SOD methods mostly design handcraft saliency heads [36, 44, 47, 78, 81] to aggregate multi-level features from the pre-trained backbone,
|
| 14 |
+
|
| 15 |
+

|
| 16 |
+
Figure 1. Mobile latency and performance comparison between our iNAS and recent state-of-the-art SOD models.
|
| 17 |
+
|
| 18 |
+
e.g., VGG [51], ResNet [23] and Res2Net [13]. Their prohibitive inference latency often prevents them from being applied on devices other than GPUs. On the other hand, handcraft low-latency SOD models designed for resource-constrained scenarios [16, 46] suffer from a large performance drop. The dilemma between model performance and inference latency makes manually designing SOD models for each device a heavy workload. Therefore, we aim at a device-aware search scheme that quickly finds suitable low-latency SOD models on multiple devices.
|
| 19 |
+
|
| 20 |
+
There are several obstacles to achieving low-latency SOD models on different devices, as shown in Fig. 2. Firstly, the relative latency of operators varies among devices due to different parallel computation abilities, IO bottlenecks, and implementations. Transferring an SOD model designed for one device to another would result in sub-optimal latency and performance. Secondly, conventional handcraft SOD models design more powerful saliency heads [36, 44, 47, 81] or more efficient backbones [16, 46], while ignoring their relations. Similarly, most neural architecture search (NAS) methods focus on the backbone for the classification task [35, 53] or incorporate a fixed segmentation head [33, 34] while ignoring the backbone-head relationship. We observe that a powerful backbone achieves
|
| 21 |
+
|
| 22 |
+

|
| 23 |
+
Design Space
|
| 24 |
+
Deployment Devices
|
| 25 |
+
|
| 26 |
+

|
| 27 |
+
Figure 2. iNAS unifies backbone and head design into an integral design space and specializes low-latency SOD models for different devices.
|
| 28 |
+
|
| 29 |
+
sub-optimal efficiency with a weak saliency head and vice versa. These obstacles prevent the community from designing device-aware low-latency SOD models either with handcraft or NAS schemes.
|
| 30 |
+
|
| 31 |
+
To deal with these problems, we propose a device-aware search scheme with an integral search space to train the model once and quickly find high-performance but low-latency SOD models on multiple devices. Specifically, we propose an integral search space for SOD models that holistically consider the backbone and saliency head. To meet multi-scale requirements of SOD models while avoiding the latency increased by multi-branch structures, we construct a searchable multi-scale unit (SMSU). The SMSU supports searchable parallel convolutions with different kernel sizes, and reparameterizes searched multi-branch convolutions to one branch for low inference latency. We also generalize handcraft saliency heads [25, 36, 41, 44, 75] into searchable transport and decoder parts, resulting in a rich saliency head search space for cooperating with the backbone space.
|
| 32 |
+
|
| 33 |
+
With multi-scale architectures, the proposed integral SOD search space is significantly larger than NAS spaces for the classification task [2, 72]. After training the one-shot supernet, previous methods adopt evolution search with uniform sampling [2, 21, 72] to explore the search space. Uniform sampling ensures that different architecture choices within one layer have equal sampling probability. However, the overall latency of sampled models obeys a multinomial distribution, which causes extremely low-latency or extremely high-latency areas to be under-sampled. This imbalance sampling problem prevents uniform sampling from exploring the entire latency area of our enlarged search space. To overcome it, we propose latency-group sampling (LGS), which introduces the device latency to guide sampling. By dividing the layer-wise search space into several latency groups and aggregating samples within specific latency groups, LGS preserves the offspring
|
| 34 |
+
|
| 35 |
+
in the under-sampled areas while controlling the number of samples in the over-sampled area. Compared with uniform sampling, the evolution search with LGS can explore the entire integral search space and find a group of models on a higher and wider Pareto frontier.
|
| 36 |
+
|
| 37 |
+
The main contributions of this paper are:
|
| 38 |
+
|
| 39 |
+
- An integral SOD search space that considers the backbone-head relation and covers existing SOTA handcraft SOD designs.
|
| 40 |
+
- A device-aware evolution search with latency-group sampling for exploring the entire latency area of the proposed search space.
|
| 41 |
+
- A thorough evaluation of the iNAS on five popular SOD datasets. Our method can reach a similar performance with handcraft SOTA methods but largely reduces inference latency on different devices, which helps to scale up the application of SOD to different deployment scenarios.
|
| 42 |
+
|
| 43 |
+
# 2. Related Work
|
| 44 |
+
|
| 45 |
+
# 2.1. Salient Object Detection.
|
| 46 |
+
|
| 47 |
+
Traditional SOD methods [1, 6, 55, 83] mainly rely on handcraft features and heuristic priors. [28, 29, 79] make an early attempt to use convolution neural networks (CNNs) to extract patch-level features. Inspired by FCN [41], the recent SOD methods [39, 57, 60] formulate SOD as a pixelwise prediction task, which achieves large improvement over traditional or CNN-based methods. We refer readers to comprehensive surveys [1, 59, 82].
|
| 48 |
+
|
| 49 |
+
Most of the SOD methods handcraft the saliency head to effectively fuse the multi-scale information of the multilevel feature extracted by the pre-trained backbone [14,23, 51], e.g., ResNet [23]. These methods [4, 17, 25, 36, 38, 58, 67] inherit an encoder-decoder structure, in which the decoder is responsible for the bottom-up feature fusion. Transport layers [12, 44, 74, 75, 80] are included inside the saliency head, enabling both the bottom-up and top-down feature fusion. Methods that introduce edge cues into the saliency head for precise boundary refinement [30, 62, 78] are orthogonal to our search space.
|
| 50 |
+
|
| 51 |
+
Increasingly complicated SOD models bring steady performance improvements at the cost of prohibitive inference latency. Recent works [16, 20, 63, 64, 81] try to design lightweight models to eliminate the large inference latency. Among them, CPD [64] and ITSD [81] design lightweight saliency heads, achieving fast speed on CPUs and GPUs, respectively. CSNet [16] designs a light SOD backbone to achieve low latency on mobile phones and embedded devices. However, separating the design from the deployment devices causes sub-optimal latency when the hardware characteristics are quite different.
|
| 52 |
+
|
| 53 |
+
In this work, we introduce an integral search space that
|
| 54 |
+
|
| 55 |
+

|
| 56 |
+
Figure 3. The designs of recent handcraft SOD models and the proposed integral search space.
|
| 57 |
+
|
| 58 |
+
<table><tr><td colspan="7">Backbone</td><td colspan="3">Transport</td><td colspan="3">Decoder</td></tr><tr><td>Stage</td><td>Operator</td><td>Resolutions</td><td>Channels</td><td>Layers</td><td>Kernel</td><td>Level</td><td>Kernel</td><td>Fusions</td><td>Level</td><td>Kernel</td><td>Fusions</td><td></td></tr><tr><td>stem</td><td>Conv</td><td>256x256-384x384</td><td>32-40</td><td>1</td><td>3</td><td rowspan="2">1</td><td rowspan="2">3,5,7,9</td><td rowspan="2">1-5</td><td rowspan="2">1</td><td rowspan="2">3,5,7,9</td><td rowspan="2">2-5</td><td></td></tr><tr><td>1</td><td>MBconv1</td><td>128x128-192x192</td><td>16-24</td><td>1-2</td><td>3</td><td></td></tr><tr><td>2</td><td>MBconv6</td><td>128x128-192x192</td><td>24-32</td><td>2-3</td><td>3</td><td>2</td><td>3,5,7,9</td><td>1-5</td><td>2</td><td>3,5,7,9</td><td>2-4</td><td></td></tr><tr><td>3</td><td>MBconv6</td><td>64x64-96x96</td><td>32-48</td><td>2-3</td><td>3,5,7,9</td><td>3</td><td>3,5,7,9</td><td>1-5</td><td>3</td><td>3,5,7,9</td><td>2-3</td><td></td></tr><tr><td>4</td><td>MBconv6</td><td>32x32-48x48</td><td>64-88</td><td>2-4</td><td>3,5,7,9</td><td rowspan="2">4</td><td rowspan="2">3,5,7,9</td><td rowspan="2">1-5</td><td rowspan="2">4</td><td rowspan="2">3,5,7,9</td><td rowspan="2">2</td><td></td></tr><tr><td>5</td><td>MBconv6</td><td>32x32-48x48</td><td>96-128</td><td>2-6</td><td>3,5,7,9</td><td></td></tr><tr><td>6</td><td>MBconv6</td><td>16x16-24x24</td><td>160-216</td><td>2-6</td><td>3,5,7,9</td><td rowspan="2">5</td><td rowspan="2">3,5,7,9</td><td rowspan="2">1-5</td><td rowspan="2">5</td><td rowspan="2">3,5,7,9</td><td rowspan="2">1</td><td></td></tr><tr><td>7</td><td>MBconv6</td><td>16x16-24x24</td><td>320-352</td><td>1-2</td><td>3,5,7,9</td><td></td></tr></table>
|
| 59 |
+
|
| 60 |
+
Table 1. Detailed configurations of the proposed integral search space.
|
| 61 |
+
|
| 62 |
+
covers most of the handcraft SOD designs. Based on our integral search space, we propose a device-aware search scheme, which achieves similar performance to SOTA methods but largely reduces latency on different devices.
|
| 63 |
+
|
| 64 |
+
# 2.2. Neural Architecture Search.
|
| 65 |
+
|
| 66 |
+
Neural architecture search (NAS) demonstrates its potential to design efficient networks for various tasks automatically [15, 18, 32, 34, 49, 70, 73, 76]. Early methods based on reinforcement learning [84, 85] and evolutionary algorithm [48, 65] train thousands of candidate architectures to learn a meta-controller, cost hundreds of GPU days to search. Later, differentiable NAS [19, 35] and one-shot NAS [2, 21, 72] exploit the idea of weight-sharing [45] to reduce the search cost, where the one-shot NAS decouples the supernet training and architecture search. Most one-shot NAS methods [2, 21, 72] target improving the supernet training and adopt evolution search with uniform sampling to explore the search space. However, we find uniform sampling causes an imbalance sampling problem when taking model latency into account.
|
| 67 |
+
|
| 68 |
+
Apart from the search method, the search space plays a vital role in NAS. Early methods [35, 45, 48, 65] utilize a cell-based search space, where the cell is composed of multiple searchable operations. Based on the cell-based search space, Auto-deeplab [34] additionally supports searching for the macro-structure of scale transformation. In order to adapt to the segmentation task, Auto-deeplab incorporates fixed parallel ASPP [3] decoders. However, the searched structures of the cell-based search space have complicated branch
|
| 69 |
+
|
| 70 |
+
connections, which are hard to parallelize in current deep learning frameworks [52], limiting their potential for low-latency applications. Exploiting human expert knowledge, MnasNet [53] and following works [9, 54, 61] develop a MobileNet-based [50] search space, which supports more hardware-friendly architectures than the cell-based search space. However, since these methods are designed for classification tasks, they have limited multi-scale representation capability and cannot be directly applied to SOD.
|
| 71 |
+
|
| 72 |
+
Two design principles make iNAS different from Auto-deeplab and MnasNet: 1) the integral search over all components reduces the overall inference latency; 2) the searchable multi-scale unit supports searching for multi-branch structures without additional inference latency cost. To fully explore the proposed integral search space, we propose latency-group sampling to address the imbalance sampling problem of previous one-shot NAS methods [2, 21, 72]. Different from FairNAS [9], which aims to improve the fairness of optimizing different components in the supernet training stage, our latency-group sampling aims to explore the search space in a balanced way in the search stage.
|
| 73 |
+
|
| 74 |
+
# 3. Methodology
|
| 75 |
+
|
| 76 |
+
# 3.1. Integral SOD Design Space.
|
| 77 |
+
|
| 78 |
+
The previous handcraft SOD models [1, 39, 59] are mainly based on the fixed pre-trained backbone (e.g., VGG [51] and ResNet [23]) and design saliency head to fuse the multi-level feature from the backbone. Some
|
| 79 |
+
|
| 80 |
+

|
| 81 |
+
|
| 82 |
+

|
| 83 |
+
(c) Reparameterization of multi-branch structure
|
| 84 |
+
Figure 4. Illustration of the searchable multi-scale unit (SMSU).
|
| 85 |
+
|
| 86 |
+
recent works have noticed that the pre-trained backbone accounts for most of the latency cost [16]. Instead of adopting a heavy backbone, they design lightweight backbones for SOD. However, both design strategies separate the backbone and decoder design, which hinders finding low-latency, high-performance SOD models in the integral design space. This section introduces an integral SOD design space, composed of the basic search unit (i.e., the searchable multi-scale unit) in Sec. 3.1.1 and the searchable saliency head in Sec. 3.1.2.
|
| 87 |
+
|
| 88 |
+
# 3.1.1 Searchable Multi-Scale Unit.
|
| 89 |
+
|
| 90 |
+
Since previous general backbones account for most of the latency cost, recent designs [16, 46] of SOD backbones replace vanilla convolution with group convolution [66] or separable convolution [50] to reduce latency. To capture multi-scale representations in images, they design several branches to encode features with different receptive fields and fuse multi-scale features. However, multi-branch structures are not hardware-friendly [50, 61, 77], which slows down inference. For example, CSNet [16] uses $13.4 \times$ fewer FLOPs than ITSD-R [81] but only achieves similar inference latency on the GPU. We thus propose a searchable multi-scale unit (SMSU), which automatically supports finding suitable multi-scale fusions. The SMSU
|
| 91 |
+
|
| 92 |
+
enables multi-branch structures to capture multi-scale feature representations during training and adopts the reparameterization strategy [11] to fuse the multiple branches into a single branch for fast inference.
|
| 93 |
+
|
| 94 |
+
We show a two-branch setting of the SMSU in Fig. 4 (a,b). The SMSU can extract multi-scale feature representations with different kernel sizes. Specifically, assuming a $3 \times 3$ Conv and a $5 \times 5$ Conv, we denote the depthwise convolution parameters as $W_{1} \in \mathcal{R}^{C \times 1 \times 3 \times 3}$ and $W_{2} \in \mathcal{R}^{C \times 1 \times 5 \times 5}$. The batch normalization (BN) parameters following the $3 \times 3$ Conv and $5 \times 5$ Conv are denoted as $\mu_{1}, \sigma_{1}, \gamma_{1}, \beta_{1}$ and $\mu_{2}, \sigma_{2}, \gamma_{2}, \beta_{2}$, respectively. Given an input feature $F_{in} \in \mathcal{R}^{C \times H \times W}$, we denote the output feature as $M = F_{in} * W$, where $*$ denotes convolution. The fusion of the two branches can be written as
|
| 95 |
+
|
| 96 |
+
$$
|
| 97 |
+
\begin{aligned} F_{out}^{(i)} ={} & \left(M_{1}^{(i)} - \mu_{1}^{(i)}\right) \frac{\gamma_{1}^{(i)}}{\sigma_{1}^{(i)}} + \beta_{1}^{(i)} \\ & + \left(M_{2}^{(i)} - \mu_{2}^{(i)}\right) \frac{\gamma_{2}^{(i)}}{\sigma_{2}^{(i)}} + \beta_{2}^{(i)}, \end{aligned} \tag{1}
|
| 98 |
+
$$
|
| 99 |
+
|
| 100 |
+
where $i$ indexes the channel. Eqn. (1) describes the multi-scale fusion of the SMSU at training time. For deployment, we merge each convolution weight and its following BN parameters into a single convolution, defined as
|
| 101 |
+
|
| 102 |
+
$$
|
| 103 |
+
V^{(i)} = \frac{\gamma^{(i)}}{\sigma^{(i)}} W^{(i)}, \quad b^{(i)} = -\frac{\mu^{(i)} \gamma^{(i)}}{\sigma^{(i)}} + \beta^{(i)}, \tag{2}
|
| 104 |
+
$$
|
| 105 |
+
|
| 106 |
+
where $V$ is the merged convolution weight and $b$ is the bias. Then we zero-pad the smaller kernels to match the size of the largest kernel. Finally, we average the two branches to obtain a single convolution weight and bias.
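As a concrete illustration of Eqn. (1) and Eqn. (2), the sketch below folds a depthwise Conv-BN branch into one convolution and then combines several branches by zero-padding the smaller kernels to the largest kernel size. This is a minimal PyTorch sketch, not the released iNAS code; the function names are illustrative, and the branches are summed as written in Eqn. (1) (dividing by the number of branches gives the averaged variant mentioned above).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def merge_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d):
    """Fold a BN layer into the preceding (depthwise) convolution, Eqn. (2)."""
    sigma = torch.sqrt(bn.running_var + bn.eps)
    scale = bn.weight / sigma                              # gamma / sigma
    V = conv.weight * scale.reshape(-1, 1, 1, 1)           # merged kernel
    b = bn.bias - bn.running_mean * scale                  # merged bias
    return V.detach(), b.detach()

def fuse_branches(branches, largest_k):
    """Zero-pad each merged kernel to largest_k x largest_k and combine the
    branches into one equivalent convolution (the per-channel sum in Eqn. (1))."""
    V_fused, b_fused = 0.0, 0.0
    for V, b in branches:
        pad = (largest_k - V.shape[-1]) // 2
        V_fused = V_fused + F.pad(V, [pad] * 4)            # pad both spatial dims
        b_fused = b_fused + b
    return V_fused, b_fused

# Usage: two depthwise branches (3x3 and 5x5) become a single 5x5 convolution.
C = 32
conv3, bn3 = nn.Conv2d(C, C, 3, padding=1, groups=C, bias=False), nn.BatchNorm2d(C)
conv5, bn5 = nn.Conv2d(C, C, 5, padding=2, groups=C, bias=False), nn.BatchNorm2d(C)
V, b = fuse_branches([merge_conv_bn(conv3, bn3), merge_conv_bn(conv5, bn5)], largest_k=5)
fused = nn.Conv2d(C, C, 5, padding=2, groups=C)
fused.weight.data, fused.bias.data = V, b   # equivalent at inference time (BN in eval mode)
```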
|
| 107 |
+
|
| 108 |
+
The two-branch fusion introduced above can be easily extended to any number of branches. Thus, we enable searching for fusion kernel combinations in the SMSU. We replace the inverted bottleneck of the MobileNet search space with the SMSU and summarize the search space in Tab. 1.
|
| 109 |
+
|
| 110 |
+
# 3.1.2 Searchable Saliency Head.
|
| 111 |
+
|
| 112 |
+
Previous handcraft saliency heads incorporate transport or decoder parts to fuse multi-level features from the backbone. The high-level features provide a rough location of the salient object, and the low-level features provide detailed information for recovering edges and boundaries. As shown in Fig. 3, typical transport designs [44, 75] enable both bottom-up and top-down fusion of multi-level features. Our searchable transport connects to all resolution levels of the backbone. In our largest child transport, each level can aggregate features from all five resolution levels like Amulet [75], while our smallest child transport only keeps identity branches, like FCN [41]. The downsample and upsample branches are composed of $1 \times 1$ Conv-BN
|
| 113 |
+
|
| 114 |
+

|
| 115 |
+
(a) US
|
| 116 |
+
|
| 117 |
+

|
| 118 |
+
(b) LGS
|
| 119 |
+
|
| 120 |
+

|
| 121 |
+
Figure 5. Illustration of uniform sampling (US) and latency-group sampling (LGS).
|
| 122 |
+
Figure 6. Illustration of iNAS search and deployment.
|
| 123 |
+
|
| 124 |
+
and maxpool operation/bilinear interpolation. Our searchable transport covers many SOTA SOD transport designs [12, 37, 44, 80].
|
| 125 |
+
|
| 126 |
+
Unlike the transport, the decoders [25,36] only support a bottom-up prediction refinement and gradually add in low-level features to recover the boundary. Thus, we do not support top-down fusion branches in the decoder. The identity and upsample branches from the adjacent resolution level are fixed, while other branches are searchable. The largest child decoder has a similar structure to DSS [25], while the smallest child decoder is similar to FCN [41]. The searchable decoder covers many handcraft SOD decoder designs [4,38,58,64,67].
|
| 127 |
+
|
| 128 |
+
Considering that the best receptive fields for multi-scale fusion may differ across resolution levels, we use the SMSU as the basic search unit in the transport and decoder. Though multi-scale fusion is proven to be effective in SOD, pruning redundant fusion branches and choosing appropriate fusion kernels under latency constraints is labor-intensive. Our proposed saliency head makes these key components searchable, so they are automatically designed together with the backbone to minimize inference latency.
|
| 129 |
+
|
| 130 |
+
# 3.2. Latency-group Sampling.
|
| 131 |
+
|
| 132 |
+
Previous one-shot methods adopt evolution search with uniform sampling, which causes an imbalance sampling problem when considering model latency. As illustrated in
|
| 133 |
+
|
| 134 |
+
# Algorithm 1: Evolution Search with LGS
|
| 135 |
+
|
| 136 |
+
Input: Trained supernet, initial population size $N$ , latency lookup table (LUT), latency groups $G$ , offspring size $k$ , crossover probability $p_c$ , mutation probability $p_m$ , iteration iter.
|
| 137 |
+
|
| 138 |
+
Output: Pareto frontier of population $P$ .
|
| 139 |
+
|
| 140 |
+
1 Compute the lower-bound and upper-bound of latency (i.e., $\mathbf{LAT}_{min}^{l}$ and $\mathbf{LAT}_{max}^{l}$ ) in each layer $l$ based on LUT;
|
| 141 |
+
2 Divide the $(\mathbf{LAT}_{min}^{l},\mathbf{LAT}_{max}^{l})$ in each layer $l$ into $G$ groups;
|
| 142 |
+
3 Sample $\frac{N}{G}$ child models for each latency group $\{P_{i}|i = 1\dots G\}$
|
| 143 |
+
4 Set initial population $P = P_{1} \cup \ldots \cup P_{G}$ ;
|
| 144 |
+
5 Evaluate performance for models in $P$
|
| 145 |
+
6 for $j = 1\ldots$ iter do
|
| 146 |
+
|
| 147 |
+
7 for each $P_{i}$ do
8     $S_{i} \gets$ Select $\frac{k}{G}$ models from the Pareto frontier of $P_{i}$;
|
| 148 |
+
9 $S = S_{1}\cup \ldots \cup S_{G};$
|
| 149 |
+
10 for each model in $S$ do
|
| 150 |
+
11 Crossover and mutate the model with probabilities $p_c$ and $p_m$;
|
| 151 |
+
12 Evaluate performance for models in $S$
|
| 152 |
+
13 $P = P\cup S$
|
| 153 |
+
14 $P\gets$ Select Pareto frontier of $P$
|
| 154 |
+
15 Return $P$
|
| 155 |
+
|
| 156 |
+
Fig. 5, the whole search space is composed of layer-wise block choices. The block choices within each layer vary in latency. If we uniformly sample a block layer by layer, the accumulated latency of the sampled models will obey a multinomial distribution, i.e., the extremely low-latency and extremely high-latency areas are under-sampled while the middle latency area is over-sampled. To explore the entire latency area of our integral search space, we propose latency-group sampling (LGS). Given a latency lookup table (LUT), we divide the layer-wise search space into several latency groups. To get a model in a specific latency group, we sample blocks within this latency group at each layer. Although samples remain imbalanced within each local latency group, we obtain balanced samples over the global latency range if enough groups are used. Moreover, when selecting elite offspring, we also keep a balanced number of offspring across latency groups.
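To make the difference from uniform sampling concrete, the sketch below draws a child architecture from one latency group: every layer's block choices are binned by their LUT latency into $G$ groups, and sampling is restricted to the target bin. This is a simplified sketch of the idea, not the released search code; the LUT format and function names are assumptions.

```python
import random

def split_into_groups(lut_per_layer, G):
    """Partition each layer's block choices into G latency groups using the LUT.
    lut_per_layer: list of dicts mapping block_choice -> latency in ms."""
    grouped = []
    for lut in lut_per_layer:
        lo, hi = min(lut.values()), max(lut.values())
        width = (hi - lo) / G or 1e-9
        groups = [[] for _ in range(G)]
        for block, lat in lut.items():
            idx = min(int((lat - lo) / width), G - 1)
            groups[idx].append(block)
        grouped.append(groups)
    return grouped

def sample_child(grouped, g):
    """Sample one block per layer from latency group g (falling back to any
    non-empty group when that bin is empty for a layer)."""
    child = []
    for groups in grouped:
        candidates = groups[g] or next(gr for gr in groups if gr)
        child.append(random.choice(candidates))
    return child

# Uniform sampling would instead pick random.choice(list(lut)) in every layer,
# so the summed latency of a child concentrates around the middle of the range.
```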
|
| 157 |
+
|
| 158 |
+
The general pipeline of the device-aware evolution search is depicted in Fig. 6. We first build a latency lookup table (LUT) on the target device. Then we perform the evolution search based on LGS. After searching, the searched model inherits the supernet weights and can be directly deployed without retraining. As shown in Algorithm 1, the evolution search with LGS contains four stages:
|
| 159 |
+
|
| 160 |
+
<table><tr><td>Method</td><td>FLOPs (G)</td><td colspan="2">Latency (ms) GPU Embedded</td><td colspan="2">ECSSD(1000) maxF MAE Sm</td><td colspan="2">DUT-O(5168) maxF MAE Sm</td><td colspan="2">DUTS-TE(5019) maxF MAE Sm</td><td colspan="2">HKU-IS(4447) maxF MAE Sm</td><td colspan="2">PASCAL-S(850) maxF MAE Sm</td></tr><tr><td colspan="14">VGG-16/VGG-19</td></tr><tr><td>\(NLDF_{CVPR17}\ [42]\)</td><td>66.68</td><td>9.48</td><td>505.59</td><td>0.905</td><td>0.063</td><td>0.875</td><td>0.753</td><td>0.080</td><td>0.770</td><td>0.813</td><td>0.065</td><td>0.805</td><td>0.902</td></tr><tr><td>\(DSS_{CVPR17}\ [25]\)</td><td>48.75</td><td>5.85</td><td>N/A</td><td>0.921</td><td>0.052</td><td>0.882</td><td>0.781</td><td>0.063</td><td>0.790</td><td>0.825</td><td>0.056</td><td>0.812</td><td>0.916</td></tr><tr><td>\(PiCANet_{CVPR18}\ [39]\)</td><td>59.82</td><td>34.21</td><td>N/A</td><td>0.931</td><td>0.046</td><td>0.914</td><td>0.794</td><td>0.068</td><td>0.826</td><td>0.851</td><td>0.054</td><td>0.861</td><td>0.921</td></tr><tr><td>\(CPD-V_{CVPR19}\ [64]\)</td><td>24.08</td><td>3.78</td><td>266.40</td><td>0.936</td><td>0.040</td><td>0.910</td><td>0.793</td><td>0.057</td><td>0.818</td><td>0.864</td><td>0.043</td><td>0.866</td><td>0.924</td></tr><tr><td>\(ITSD-V_{CVPR20}\ [81]\)</td><td>17.08</td><td>9.97</td><td>494.93</td><td>0.939</td><td>0.040</td><td>0.914</td><td>0.807</td><td>0.063</td><td>0.829</td><td>0.876</td><td>0.042</td><td>0.877</td><td>0.927</td></tr><tr><td>\(PoolNet-V_{CVPR19}\ [36]\)</td><td>48.80</td><td>8.81</td><td>N/A</td><td>0.941</td><td>0.042</td><td>0.917</td><td>0.806</td><td>0.056</td><td>0.833</td><td>0.876</td><td>0.042</td><td>0.878</td><td>-</td></tr><tr><td>\(EGNet-V_{ICCV19}\ [78]\)</td><td>120.15</td><td>11.58</td><td>N/A</td><td>0.943</td><td>0.041</td><td>0.919</td><td>0.809</td><td>0057</td><td>0.836</td><td>0.877</td><td>0.044</td><td>0.878</td><td>0.930</td></tr><tr><td>\(MINet-V_{CVPR20}\ [44]\)</td><td>71.76</td><td>14.78</td><td>N/A</td><td>0.943</td><td>0.036</td><td>0.919</td><td>0.794</td><td>0.057</td><td>0.822</td><td>0.877</td><td>0.039</td><td>0.875</td><td>0.930</td></tr><tr><td colspan="14">ResNet-34/ResNet-101/ResNetXt-101</td></tr><tr><td>\(R3Net_{JICAII}\ [10]\)</td><td>26.19</td><td>6.70</td><td>335.14</td><td>0.934</td><td>0.040</td><td>0.910</td><td>0.795</td><td>0.063</td><td>0.817</td><td>0.831</td><td>0.057</td><td>0.835</td><td>0.916</td></tr><tr><td>\(CPD-R_{CVPR19}\ [64]\)</td><td>7.19</td><td>2.52</td><td>124.09</td><td>0.939</td><td>0.037</td><td>0.918</td><td>0.797</td><td>0.056</td><td>0.825</td><td>0.865</td><td>0.043</td><td>0.869</td><td>0.925</td></tr><tr><td>\(BASNet_{CVPR19}\ [47]\)</td><td>97.51</td><td>16.37</td><td>N/A</td><td>0.942</td><td>0.037</td><td>0.916</td><td>0.805</td><td>0.056</td><td>0.836</td><td>0.859</td><td>0.048</td><td>0.865</td><td>0.928</td></tr><tr><td>\(PoolNet-R_{CVPR19}\ [36]\)</td><td>38.17</td><td>9.13</td><td>N/A</td><td>0.944</td><td>0.039</td><td>0.921</td><td>0.808</td><td>0.056</td><td>0.836</td><td>0.880</td><td>0.040</td><td>0.883</td><td>0.932</td></tr><tr><td>\(EGNet-R_{ICCV19}\ [78]\)</td><td>120.85</td><td>12.01</td><td>N/A</td><td>0.947</td><td>0.037</td><td>0.925</td><td>0.815</td><td>0.053</td><td>0.841</td><td>0.888</td><td>0.039</td><td>0.887</td><td>0.935</td></tr><tr><td>\(MINet-R_{CVPR20}\ 
[44]\)</td><td>42.68</td><td>7.38</td><td>N/A</td><td>0.947</td><td>0.033</td><td>0.925</td><td>0.810</td><td>0.056</td><td>0.833</td><td>0.884</td><td>0.037</td><td>0.884</td><td>0.935</td></tr><tr><td>\(ITSD-R_{CVPR20}\ [81]\)</td><td>9.65</td><td>3.57</td><td>164.76</td><td>0.947</td><td>0.034</td><td>0.925</td><td>0.820</td><td>0.061</td><td>0.840</td><td>0.882</td><td>0.041</td><td>0.884</td><td>0.934</td></tr><tr><td colspan="14">Handcraft SOD Backbone</td></tr><tr><td>\(CSNet_{ECCV20}\ [16]\)</td><td>0.72</td><td>3.63</td><td>95.75</td><td>0.916</td><td>0.065</td><td>0.893</td><td>0.775</td><td>0.081</td><td>0.805</td><td>0.813</td><td>0.075</td><td>0.822</td><td>0.898</td></tr><tr><td>\(U^2\text{-}Net_{PR20}\ [46]\)</td><td>9.77</td><td>4.45</td><td>173.61</td><td>0.943</td><td>0.041</td><td>0.918</td><td>0.813</td><td>0.060</td><td>0.837</td><td>0.852</td><td>0.054</td><td>0.858</td><td>0.928</td></tr><tr><td colspan="14">Searched Models on Different Devices</td></tr><tr><td>iNAS(GPU)-S</td><td>0.43</td><td>1.32</td><td>48.56</td><td>0.944</td><td>0.037</td><td>0.921</td><td>0.819</td><td>0.055</td><td>0.842</td><td>0.872</td><td>0.043</td><td>0.875</td><td>0.930</td></tr><tr><td>iNAS(Embedded)-S</td><td>0.41</td><td>1.53</td><td>40.99</td><td>0.944</td><td>0.038</td><td>0.920</td><td>0.816</td><td>0.056</td><td>0.840</td><td>0.871</td><td>0.043</td><td>0.875</td><td>0.931</td></tr><tr><td>iNAS(GPU)-L</td><td>0.70</td><td>1.94</td><td>71.70</td><td>0.947</td><td>0.036</td><td>0.924</td><td>0.824</td><td>0.052</td><td>0.846</td><td>0.879</td><td>0.040</td><td>0.881</td><td>0.935</td></tr><tr><td>iNAS(Embedded)-L</td><td>0.63</td><td>2.30</td><td>63.39</td><td>0.947</td><td>0.036</td><td>0.924</td><td>0.820</td><td>0.055</td><td>0.842</td><td>0.875</td><td>0.041</td><td>0.879</td><td>0.935</td></tr></table>
|
| 161 |
+
|
| 162 |
+
Table 2. Comparison with existing SOD methods. The FLOPs and latency are measured with ${224} \times {224}$ input images. N/A means that it could not be deployed on the embedded device because of the out-of-memory error.
|
| 163 |
+
|
| 164 |
+

|
| 165 |
+
Figure 7. Speed comparison with existing SOD methods on different devices. iNAS achieves SOTA performance and consistent speedup.
|
| 166 |
+
|
| 167 |
+

|
| 168 |
+
|
| 169 |
+

|
| 170 |
+
|
| 171 |
+
- S1: Initialization. We divide the latency range of block choices in each layer into $G$ latency groups. We sample $N$ candidates for an initial population $P$, where each latency group has $\frac{N}{G}$ samples.
|
| 172 |
+
- S2: Selection. We select $k$ models from the Pareto frontier of $P$ into a candidate set $S$ , where each latency group contains $\frac{k}{G}$ samples.
|
| 173 |
+
- S3: Crossover. Each model in $S$ has a probability $p_c$ of crossing over with another model in $S$. We allow swapping the stage-wise configuration in the backbone and the level-wise configuration in the head.
|
| 174 |
+
- S4: Mutation. For each model in $S$, each configuration has a probability $p_m$ of mutating. Then we merge $S$ into the population $P$ and return to S2 until the target number of iterations *iter* is reached.
|
| 175 |
+
|
| 176 |
+
The main difference between LGS and uniform sampling lies in the initialization and selection steps. In the initialization step, LGS balances the samples across different latency areas,
|
| 177 |
+
|
| 178 |
+
while uniform sampling over-samples the middle latency area. Then, in the selection step, LGS preserves a certain number of elite offspring in each group, which enables the evolution search to find better models in different latency areas.
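The selection step (S2) therefore keeps a fixed quota of elites per latency group instead of one global Pareto set. A small sketch of this per-group Pareto selection is given below; it assumes each candidate carries a measured latency and a maxF score, and all names are illustrative rather than the released API.

```python
def pareto_frontier(candidates):
    """Keep candidates that are not dominated in (latency, maxF):
    lower latency and higher maxF are both better."""
    frontier = []
    for c in candidates:
        dominated = any(
            o["latency"] <= c["latency"] and o["maxF"] >= c["maxF"]
            and (o["latency"] < c["latency"] or o["maxF"] > c["maxF"])
            for o in candidates)
        if not dominated:
            frontier.append(c)
    return frontier

def select_per_group(population, G, k, lat_min, lat_max):
    """S2: take about k/G elites from the Pareto frontier of each latency group."""
    width = (lat_max - lat_min) / G
    selected = []
    for g in range(G):
        lo, hi = lat_min + g * width, lat_min + (g + 1) * width
        group = [c for c in population if lo <= c["latency"] < hi]
        elites = sorted(pareto_frontier(group), key=lambda c: c["maxF"], reverse=True)
        selected.extend(elites[: max(1, k // G)])
    return selected
```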
|
| 179 |
+
|
| 180 |
+
# 4. Experiments
|
| 181 |
+
|
| 182 |
+
# 4.1. Implementation Details.
|
| 183 |
+
|
| 184 |
+
Details of supernet training. We implement iNAS using the PyTorch [52] and Jittor [26] libraries. We organize the search space as a nested supernet, as in [2, 72]. Specifically, the weights of a smaller convolution kernel are copied from the center of the larger kernel and then transformed by a fully connected layer. Likewise, the lower-index channels and layers are shared. The supernet is trained on DUTS-TR for 100 epochs with ImageNet pre-training. The training batch size is set to 40. We use an Adam optimizer with a learning
|
| 185 |
+
|
| 186 |
+

|
| 187 |
+
Figure 8. Comparison between the integral and partial search.
|
| 188 |
+
|
| 189 |
+
(a) Search space exploration. B: backbone, H: head.
|
| 190 |
+
|
| 191 |
+
<table><tr><td colspan="2">Searchable</td><td colspan="2">Low Latency Arch</td><td colspan="2">High Performance Arch</td></tr><tr><td>Backbone</td><td>Head</td><td>Latency (ms)</td><td>maxF</td><td>Latency (ms)</td><td>maxF</td></tr><tr><td>X</td><td>X</td><td>45.17</td><td>0.941</td><td>45.17</td><td>0.941</td></tr><tr><td>✓</td><td>X</td><td>41.20</td><td>0.941</td><td>63.56</td><td>0.946</td></tr><tr><td>X</td><td>✓</td><td>36.20</td><td>0.940</td><td>44.30</td><td>0.942</td></tr><tr><td>✓</td><td>✓</td><td>33.06</td><td>0.944</td><td>61.24</td><td>0.947</td></tr></table>
|
| 192 |
+
|
| 193 |
+
rate of 1e-4 and the poly learning rate schedule [40]. We sample the largest, the smallest, and two middle models for each iteration and fuse their gradients to update the supernet. Following [25], we add deep supervision on the prediction of each decoder level. The supernet training costs 17 hours on four Tesla V100.
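The "largest + smallest + two middle models" update is the sandwich-style training rule used by one-shot NAS [2, 72]. A minimal sketch of one such training step is shown below; it assumes a supernet that can activate a sampled sub-network via a `set_active(config)` call, and all helper names are illustrative, not the released API.

```python
def supernet_step(supernet, optimizer, images, labels, criterion, sample_random_config):
    """One update: accumulate gradients of the largest, smallest, and two randomly
    sampled (middle) child models, then apply a single optimizer step."""
    optimizer.zero_grad()
    configs = [supernet.largest_config(), supernet.smallest_config(),
               sample_random_config(), sample_random_config()]
    for cfg in configs:
        supernet.set_active(cfg)               # activate one child model
        loss = criterion(supernet(images), labels)
        loss.backward()                        # gradients accumulate across children
    optimizer.step()
```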
|
| 194 |
+
|
| 195 |
+
Details of search and deployment. We set the initial population size $N$ to 1000 and the number of latency groups $G$ to 10. The number of evolution iterations *iter* is set to 20. Each selection step retains $k = 100$ offspring. The crossover and mutation probabilities ($p_c$ and $p_m$) are set to 0.2. To evaluate the performance of each child model, we copy its weights from the supernet and finetune its BN parameters for 200 iterations [71]. We use the PyTorch-Mobile [52] library to build the LUT on the mobile phone. On other devices, we directly benchmark speeds with the PyTorch toolkit. The search phase costs 0.8 GPU-days on one Tesla V100 GPU.
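Two pieces of this pipeline can be sketched briefly: timing a block choice on the target device to fill the LUT, and re-estimating BN statistics of a child model before scoring it (one common way to realize the BN finetuning step of [71]). Both are simplified sketches with assumed helper names, not the released code.

```python
import time
import torch

@torch.no_grad()
def benchmark_block(block, input_shape, repeats=50, warmup=10):
    """Average forward latency (ms) of one block choice; the LUT stores one such
    entry per (layer, block choice). For GPU timing, torch.cuda.synchronize()
    should bracket the timed region."""
    x = torch.randn(*input_shape)
    for _ in range(warmup):
        block(x)
    start = time.perf_counter()
    for _ in range(repeats):
        block(x)
    return (time.perf_counter() - start) / repeats * 1000.0

@torch.no_grad()
def recalibrate_bn(child, loader, iterations=200):
    """Reset BN running statistics and re-estimate them with a few hundred
    forward passes before evaluating a child model."""
    for m in child.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            m.reset_running_stats()
    child.train()                              # BN updates running stats in train mode
    for i, (images, _) in enumerate(loader):
        if i >= iterations:
            break
        child(images)
    child.eval()
```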
|
| 196 |
+
|
| 197 |
+
Dataset. The supernet is trained with the DUTS-TR dataset [56]. We conduct evaluations on five popular SOD datasets, i.e., ECSSD [68], DUT-O [69], DUTS-TE [56], HKU-IS [28], PASCAL-S [31], containing 1000, 5168, 5019, 4447, and 850 pairs of images and saliency maps.
|
| 198 |
+
|
| 199 |
+
Evaluation metrics. Following common settings [39, 47], we use MAE [7], Max F-measure $(F_{\beta})$ [6] and S-measure $(S_{m})$ [5] as the evaluation metrics to evaluate our results. Since we aim to design low-latency SOD models, the inference latency is also used as the evaluation metric.
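For reference, MAE and the max F-measure are computed per image and averaged over a dataset. The sketch below follows the common SOD convention $\beta^{2} = 0.3$ and assumes predictions and ground truth are in $[0, 1]$; the exact thresholding protocol can differ slightly between benchmark toolkits.

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the mask."""
    return np.abs(pred - gt).mean()

def max_f_measure(pred, gt, beta2=0.3, num_thresholds=255):
    """Sweep binarization thresholds and report the best F-beta for one image."""
    gt = gt > 0.5
    best = 0.0
    for t in np.linspace(0.0, 1.0, num_thresholds):
        binary = pred >= t
        tp = np.logical_and(binary, gt).sum()
        precision = tp / (binary.sum() + 1e-8)
        recall = tp / (gt.sum() + 1e-8)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
        best = max(best, f)
    return best
```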
|
| 200 |
+
|
| 201 |
+

|
| 202 |
+
Figure 9. Comparison of the evolution search with uniform sampling (US) and proposed LGS.
|
| 203 |
+
|
| 204 |
+
(b) Quantitative analysis of the integral and partial search.
|
| 205 |
+
|
| 206 |
+
<table><tr><td rowspan="2">Search Dev.</td><td colspan="4">Latency (ms)</td></tr><tr><td>GPU</td><td>CPU</td><td>Mobile</td><td>Embedded</td></tr><tr><td>GPU</td><td>1.94</td><td>48.90</td><td>397.17</td><td>71.70</td></tr><tr><td>Device-Aware</td><td>1.94</td><td>42.99</td><td>339.61</td><td>63.39</td></tr><tr><td>Latency Reduction</td><td>0%</td><td>12.1%</td><td>14.5%</td><td>10.9%</td></tr></table>
|
| 207 |
+
|
| 208 |
+
Table 3. Comparison of searching on GPU and specialized device.
|
| 209 |
+
|
| 210 |
+
# 4.2. Performance Evaluation.
|
| 211 |
+
|
| 212 |
+
Comparison to the state-of-the-art. Tab. 2 shows the comparison between our searched models and previous handcraft SOD methods. iNAS(GPU)-L, a large model searched on the GPU, requires similar FLOPs to CSNet, but reduces inference latency by $47\%$ and improves $F_{\beta}$ by $3.1\%$ on ECSSD, which suggests that FLOPs are not strongly correlated with inference latency. We also show the latency comparison of our searched models on different devices in Fig. 1 and Fig. 7. Our method achieves similar performance to SOTA but reduces latency by $1.9\times$, $3.3\times$, $2.6\times$, and $3.8\times$ on the GPU, CPU, embedded device, and mobile phone, respectively. Compared to the previous fastest methods, the fastest models searched by iNAS speed up $2.1\times$, $3.7\times$, $2.5\times$, and $4.8\times$ on these devices. Current SOD models are mostly designed for GPUs while ignoring other devices. Some ResNet-based and VGG-based methods cannot even be applied to the embedded device due to out-of-memory errors. In comparison, our device-aware searched models achieve consistent latency reductions on all devices.
|
| 213 |
+
|
| 214 |
+
Device-aware search. To verify the effectiveness of device-aware search, we compare the models searched on the GPU and on specialized devices in Tab. 3. We benchmark the latency of iNAS(GPU)-L on the target devices. With aligned performance, models searched on specialized devices achieve $12.1\%$, $14.5\%$, and $10.9\%$ latency reductions on the CPU, mobile phone, and embedded device, respectively. This observation verifies that device-aware search can find suitable
|
| 215 |
+
|
| 216 |
+

|
| 217 |
+
Figure 10. Comparison of the search space constructed by the inverted bottleneck (IB) [50] and our proposed searchable multiscale unit (SMSU).
|
| 218 |
+
|
| 219 |
+
models for target devices to reduce latency.
|
| 220 |
+
|
| 221 |
+
Integral search space. iNAS supports an integral search space for SOD. Fig. 8 verifies the importance of the integral search space. For the baseline network, we use the MobileNetV2 structure [50] as the fixed backbone and combine the Amulet transport [75] and DSS decoder [25] to form the fixed saliency head. As shown in Fig. 8 (b), the fixed baseline network costs $45.17~ms$ inference latency on the CPU and reaches $94.1\%$ maxF on ECSSD. Enabling only the searchable backbone or the searchable saliency head reduces the lower bound of latency to $41.20~ms$ $(-8.7\%)$ or $36.20~ms$ $(-19.8\%)$ with similar performance. In contrast, using the integral search space greatly reduces the lower-bound latency to $33.06~ms$ $(-26.8\%)$ while improving the performance of the fastest architecture to $94.4\%$. Similarly, the upper bound of performance is promoted to $94.7\%$. Fig. 8 (a) shows that the integral search space has a consistently better Pareto frontier than the partially searchable spaces and significantly improves over the handcraft structure in both latency and performance.
|
| 222 |
+
|
| 223 |
+
Latency-group sampling. Fig. 9 compares the evolution search based on uniform sampling and the proposed latency-group sampling (LGS). The lower-bound and upper-bound latencies of the search space are $32.12~ms$ and $74.14~ms$, respectively. As shown in Fig. 9, the lower-bound and upper-bound latencies obtained by uniform sampling are $38.55~ms$ and $59.62~ms$, which cover only $50.2\%$ of the whole latency range. In contrast, LGS ensures that each latency group has balanced samples and offspring, and can thus explore $99\%$ of the latency range of the search space. As a result, our proposed LGS obtains a broader Pareto frontier than uniform sampling.
|
| 224 |
+
|
| 225 |
+
Searchable multi-scale unit. Fig. 10 verifies the effectiveness of the proposed searchable multi-scale unit (SMSU). We compare the search space constructed with the SMSU against the one constructed with the inverted bottleneck (IB). By enhancing the IB with multi-scale ability, the search space constructed with the SMSU shows a
|
| 226 |
+
|
| 227 |
+

|
| 228 |
+
Figure 11. Visualization of the correspondence between the backbone/head latency and the performance.
|
| 229 |
+
|
| 230 |
+
better latency-performance Pareto frontier. We observe that the improvement for higher-latency models is larger, because a relaxed latency constraint enables larger kernels, which support more powerful multi-scale kernel combinations.
|
| 231 |
+
|
| 232 |
+
# 4.3. Observation
|
| 233 |
+
|
| 234 |
+
To explore the relation of performance with the backbone and head latency, we divide the backbone and head latencies into 10 groups each and sample 20 models in each grid cell, resulting in 2000 samples. Observing Fig. 11, we find that (1) a more complicated backbone consistently improves performance, (2) while a more complicated saliency head is not always the best choice. These observations show why the integral search space can reduce model latency, i.e., iNAS can choose appropriate saliency heads for backbones of a specific latency. Because choosing appropriate saliency heads for a better latency-performance balance has no apparent pattern, searching may be an efficient solution for designing low-latency SOD models.
|
| 235 |
+
|
| 236 |
+
# 5. Conclusion
|
| 237 |
+
|
| 238 |
+
In this work, we propose an integral search space (iNAS) for SOD, which generalizes the designs of handcraft SOD models. The integral search can automatically find the correspondence between the backbone and head and obtain the best performance-latency balance. We then propose latency-group sampling to explore the entire integral search space. Experiments demonstrate that iNAS achieves performance similar to handcraft SOTA SOD methods but largely reduces latency on various devices. Our work paves the way for SOD applications on low-power devices.
|
| 239 |
+
|
| 240 |
+
Acknowledgement. This research was supported by NSFC (61922046), S&T innovation project from Chinese Ministry of Education, BNrst (No. BNR2020KF01001) and the Fundamental Research Funds for the Central Universities (Nankai University, No. 63213090).
|
| 241 |
+
|
| 242 |
+
# References
|
| 243 |
+
|
| 244 |
+
[1] Ali Borji, Ming-Ming Cheng, Qibin Hou, Huaizu Jiang, and Jia Li. Salient object detection: A survey. Computational Visual Media, 5(2):117-150, 2019. 1, 2, 3
|
| 245 |
+
[2] Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. In Int. Conf. Learn. Represent., 2020. 2, 3, 6
|
| 246 |
+
[3] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell., 40(4):834-848, 2017. 3
|
| 247 |
+
[4] Shuhan Chen, Xiuli Tan, Ben Wang, and Xuelong Hu. Reverse attention for salient object detection. In Eur. Conf. Comput. Vis., pages 234-250, 2018. 2, 5
|
| 248 |
+
[5] Ming-Ming Cheng and Deng-Ping Fan. Structure-measure: A new way to evaluate foreground maps. Int. J. Comput. Vis., 129(9):2622-2638, 2021. 7
|
| 249 |
+
[6] Ming-Ming Cheng, Niloy J Mitra, Xiaolei Huang, Philip HS Torr, and Shi-Min Hu. Global contrast based salient region detection. IEEE Trans. Pattern Anal. Mach. Intell., 37(3):569-582, 2015. 2, 7
|
| 250 |
+
[7] Ming-Ming Cheng, Jonathan Warrell, Wen-Yan Lin, Shuai Zheng, Vibhav Vineet, and Nigel Crook. Efficient salient region detection with soft image abstraction. In Int. Conf. Comput. Vis., pages 1529-1536, 2013. 7
|
| 251 |
+
[8] Ming-Ming Cheng, Fang-Lue Zhang, Niloy J Mitra, Xiaolei Huang, and Shi-Min Hu. Repfinder: finding approximately repeated scene elements for image editing. ACM Trans. Graph., 29(4):1-8, 2010. 1
|
| 252 |
+
[9] Xiangxiang Chu, Bo Zhang, and Ruijun Xu. Fairnas: Rethinking evaluation fairness of weight sharing neural architecture search. In Int. Conf. Learn. Represent., 2021. 3
|
| 253 |
+
[10] Zijun Deng, Xiaowei Hu, Lei Zhu, Xuemiao Xu, Jing Qin, Guoqiang Han, and Pheng-Ann Heng. R3net: Recurrent residual refinement network for saliency detection. In IJCAI, pages 684–690, 2018. 1, 6
|
| 254 |
+
[11] Xiaohan Ding, Yuchen Guo, Guiguang Ding, and Jungong Han. Acnet: Strengthening the kernel skeletons for powerful cnn via asymmetric convolution blocks. In Int. Conf. Comput. Vis., pages 1911-1920, 2019. 4
|
| 255 |
+
[12] Mengyang Feng, Huchuan Lu, and Errui Ding. Attentive feedback network for boundary-aware salient object detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 1623-1632, 2019. 2, 5
|
| 256 |
+
[13] Shang-Hua Gao, Ming-Ming Cheng, Kai Zhao, Xin-Yu Zhang, Ming-Hsuan Yang, and Philip Torr. Res2net: A new multi-scale backbone architecture. IEEE Trans. Pattern Anal. Mach. Intell., pages 1-1, 2020. 1
|
| 257 |
+
[14] Shang-Hua Gao, Qi Han, Duo Li, Pai Peng, Ming-Ming Cheng, and Pai Peng. Representative batch normalization with feature calibration. In IEEE Conf. Comput. Vis. Pattern Recog., 2021. 2
|
| 258 |
+
[15] Shang-Hua Gao, Qi Han, Zhong-Yu Li, Pai Peng, Liang Wang, and Ming-Ming Cheng. Global2local: Efficient
|
| 259 |
+
|
| 260 |
+
structure search for video action segmentation. In IEEE Conf. Comput. Vis. Pattern Recog., 2021. 3
|
| 261 |
+
[16] Shang-Hua Gao, Yong-Qiang Tan, Ming-Ming Cheng, Chengze Lu, Yunpeng Chen, and Shuicheng Yan. Highly efficient salient object detection with 100k parameters. In Eur. Conf. Comput. Vis., 2020. 1, 2, 4, 6
|
| 262 |
+
[17] Yanliang Ge, Cong Zhang, Kang Wang, Ziqi Liu, and Hongbo Bi. Wgi-net: A weighted group integration network for rgb-d salient object detection. Computational Visual Media, 7(1):115-125, 2021. 2
|
| 263 |
+
[18] Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. Nas-fpn: Learning scalable feature pyramid architecture for object detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 7036-7045, 2019. 3
|
| 264 |
+
[19] Yu-Chao Gu, Li-Juan Wang, Yun Liu, Yi Yang, Yu-Huan Wu, Shao-Ping Lu, and Ming-Ming Cheng. Dots: Decoupling operation and topology in differentiable architecture search. In IEEE Conf. Comput. Vis. Pattern Recog., pages 12311-12320, 2021. 3
|
| 265 |
+
[20] Yu-Chao Gu, Li-Juan Wang, Zi-Qin Wang, Yun Liu, Ming-Ming Cheng, and Shao-Ping Lu. Pyramid constrained self-attention network for fast video salient object detection. In AAAI, pages 10869–10876, 2020. 1, 2
|
| 266 |
+
[21] Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In Eur. Conf. Comput. Vis., pages 544-560, 2020. 2, 3
|
| 267 |
+
[22] Junfeng He, Jinyuan Feng, Xianglong Liu, Cheng Tao, and S F Chang. Mobile product search with bag of hash bits and boundary reranking. In IEEE Conf. Comput. Vis. Pattern Recog., 2012. 1
|
| 268 |
+
[23] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conf. Comput. Vis. Pattern Recog., pages 770-778, 2016. 1, 2, 3
|
| 269 |
+
[24] Seunghoon Hong, Tackgeun You, Suha Kwak, and Bohyung Han. Online tracking by learning discriminative saliency map with convolutional neural network. In ICML, pages 597-606, 2015. 1
|
| 270 |
+
[25] Qibin Hou, Ming-Ming Cheng, Xiaowei Hu, Ali Borji, Zhuowen Tu, and Philip Torr. Deeply supervised salient object detection with short connections. IEEE Trans. Pattern Anal. Mach. Intell., 41(4):815-828, 2019. 1, 2, 3, 5, 6, 7, 8
|
| 271 |
+
[26] Shi-Min Hu, Dun Liang, Guo-Ye Yang, Guo-Wei Yang, and Wen-Yang Zhou. Jittor: a novel deep learning framework with meta-operators and unified graph execution. Science China Information Sciences, 63(222103):1-21, 2020. 6
|
| 272 |
+
[27] Agus Kurniawan. Introduction to nvidia jetson nano, 2021. 1
|
| 273 |
+
[28] Guanbin Li and Yizhou Yu. Visual saliency based on multiscale deep features. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5455-5463, 2015. 2, 7
|
| 274 |
+
[29] Guanbin Li and Yizhou Yu. Deep contrast learning for salient object detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 478-487, 2016. 2
|
| 275 |
+
[30] Xin Li, Fan Yang, Hong Cheng, Wei Liu, and Dinggang Shen. Contour knowledge transfer for salient object detection. In Eur. Conf. Comput. Vis., pages 355-370, 2018. 2
|
| 276 |
+
|
| 277 |
+
[31] Yin Li, Xiaodi Hou, Christof Koch, James M Rehg, and Alan L Yuille. The secrets of salient object segmentation. In IEEE Conf. Comput. Vis. Pattern Recog., pages 280-287, 2014. 7
|
| 278 |
+
[32] Yingwei Li, Xiaojie Jin, Jieru Mei, Xiaochen Lian, Linjie Yang, Cihang Xie, Qihang Yu, Yuyin Zhou, Song Bai, and Alan L Yuille. Neural architecture search for lightweight non-local networks. In IEEE Conf. Comput. Vis. Pattern Recog., pages 10297-10306, 2020. 3
|
| 279 |
+
[33] Yanwei Li, Lin Song, Yukang Chen, Zeming Li, Xiangyu Zhang, Xingang Wang, and Jian Sun. Learning dynamic routing for semantic segmentation. In IEEE Conf. Comput. Vis. Pattern Recog., pages 8553-8562, 2020. 1
|
| 280 |
+
[34] Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L Yuille, and Li Fei-Fei. Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. In IEEE Conf. Comput. Vis. Pattern Recog., pages 82–92, 2019. 1, 3
|
| 281 |
+
[35] Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In Int. Conf. Learn. Represent., 2019. 1, 3
|
| 282 |
+
[36] Jiang-Jiang Liu, Qibin Hou, Ming-Ming Cheng, Jiashi Feng, and Jianmin Jiang. A simple pooling-based design for real-time salient object detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 3917-3926, 2019. 1, 2, 3, 5, 6
|
| 283 |
+
[37] Jiang-Jiang Liu, Zhi-Ang Liu, and Ming-Ming Cheng. Centralized information interaction for salient object detection. arXiv preprint arXiv:2012.11294, 2020. 5
|
| 284 |
+
[38] Nian Liu and Junwei Han. DHSNet: Deep hierarchical saliency network for salient object detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 678-686, 2016. 2, 5
|
| 285 |
+
[39] Nian Liu, Junwei Han, and Ming-Hsuan Yang. PiCANet: Learning pixel-wise contextual attention for saliency detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 3089-3098, 2018. 2, 3, 6, 7
|
| 286 |
+
[40] Yun Liu, Yu-Chao Gu, Xin-Yu Zhang, Weiwei Wang, and Ming-Ming Cheng. Lightweight salient object detection via hierarchical visual perception learning. IEEE TCYB, 2020. 7
|
| 287 |
+
[41] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In IEEE Conf. Comput. Vis. Pattern Recog., pages 3431-3440, 2015. 2, 3, 4, 5
|
| 288 |
+
[42] Zhiming Luo, Akshaya Kumar Mishra, Andrew Achkar, Justin A Eichel, Shaozi Li, and Pierre-Marc Jodoin. Non-local deep features for salient object detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 6609-6617, 2017. 1, 6
|
| 289 |
+
[43] NVIDIA, Péter Vingelmann, and Frank H.P. Fitzek. Cuda, release: 10.2.89, 2020. 1
|
| 290 |
+
[44] Youwei Pang, Xiaoqi Zhao, Lihe Zhang, and Huchuan Lu. Multi-scale interactive network for salient object detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 9413-9422, 2020. 1, 2, 3, 4, 5, 6
|
| 291 |
+
[45] Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In ICML, pages 4095-4104, 2018. 3
|
| 292 |
+
[46] Xuebin Qin, Zichen Zhang, Chenyang Huang, Masood Dehghan, Osmar R Zaiane, and Martin Jagersand. U2-net: Going
|
| 293 |
+
|
| 294 |
+
deeper with nested u-structure for salient object detection. Pattern Recognition, 106:107404, 2020. 1, 4, 6
|
| 295 |
+
[47] Xuebin Qin, Zichen Zhang, Chenyang Huang, Chao Gao, Masood Dehghan, and Martin Jagersand. BASNet: Boundary-aware salient object detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 7479-7489, 2019. 1, 6, 7
|
| 296 |
+
[48] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classifier architecture search. In AAAI, volume 33, pages 4780-4789, 2019. 3
|
| 297 |
+
[49] Tonmoy Saikia, Yassine Marrakchi, Arber Zela, Frank Hutter, and Thomas Brox. Autodispnet: Improving disparity estimation with automl. In Int. Conf. Comput. Vis., October 2019. 3
|
| 298 |
+
[50] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In IEEE Conf. Comput. Vis. Pattern Recog., June 2018. 3, 4, 8
|
| 299 |
+
[51] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Int. Conf. Learn. Represent., 2015. 1, 2, 3
|
| 300 |
+
[52] Benoit Steiner, Zachary DeVito, Soumith Chintala, Sam Gross, Adam Paszke, Francisco Massa, Adam Lerer, Gregory Chanan, Zeming Lin, Edward Yang, et al. Pytorch: An imperative style, high-performance deep learning library. Adv. Neural Inform. Process. Syst., 32:8026-8037, 2019. 3, 6, 7
|
| 301 |
+
[53] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In IEEE Conf. Comput. Vis. Pattern Recog., pages 2820-2828, 2019. 1, 3
|
| 302 |
+
[54] Alvin Wan, Xiaoliang Dai, Peizhao Zhang, Zijian He, Yuandong Tian, Saining Xie, Bichen Wu, Matthew Yu, Tao Xu, Kan Chen, et al. Fbnetv2: Differentiable neural architecture search for spatial and channel dimensions. In IEEE Conf. Comput. Vis. Pattern Recog., pages 12965-12974, 2020. 3
|
| 303 |
+
[55] Jingdong Wang, Huaizu Jiang, Zejian Yuan, Ming-Ming Cheng, Xiaowei Hu, and Nanning Zheng. Salient object detection: A discriminative regional feature integration approach. Int. J. Comput. Vis., 123(2):251-268, 2017. 2
|
| 304 |
+
[56] Lijun Wang, Huchuan Lu, Yifan Wang, Mengyang Feng, Dong Wang, Baocai Yin, and Xiang Ruan. Learning to detect salient objects with image-level supervision. In IEEE Conf. Comput. Vis. Pattern Recog., pages 136-145, 2017. 7
|
| 305 |
+
[57] Linzhao Wang, Lijun Wang, Huchuan Lu, Pingping Zhang, and Xiang Ruan. Saliency detection with recurrent fully convolutional networks. In Eur. Conf. Comput. Vis., pages 825-841, 2016. 2
|
| 306 |
+
[58] Tiantian Wang, Lihe Zhang, Shuo Wang, Huchuan Lu, Gang Yang, Xiang Ruan, and Ali Borji. Detect globally, refine locally: A novel approach to saliency detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 3127-3135, 2018. 2, 5
|
| 307 |
+
[59] Wenguan Wang, Qiuxia Lai, Huazhu Fu, Jianbing Shen, Haibin Ling, and Ruigang Yang. Salient object detection in the deep learning era: An in-depth survey. IEEE Trans. Pattern Anal. Mach. Intell., 2021. 1, 2, 3
|
| 308 |
+
|
| 309 |
+
[60] Wenguan Wang, Jianbing Shen, Ming-Ming Cheng, and Ling Shao. An iterative and cooperative top-down and bottom-up inference network for salient object detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5968-5977, 2019. 2
|
| 310 |
+
[61] Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. In IEEE Conf. Comput. Vis. Pattern Recog., pages 10734-10742, 2019. 3, 4
|
| 311 |
+
[62] Runmin Wu, Mengyang Feng, Wenlong Guan, Dong Wang, Huchuan Lu, and Errui Ding. A mutual learning method for salient object detection with intertwined multi-supervision. In IEEE Conf. Comput. Vis. Pattern Recog., pages 8150-8159, 2019. 2
|
| 312 |
+
[63] Yu-Huan Wu, Yun Liu, Jun Xu, Jia-Wang Bian, Yuchao Gu, and Ming-Ming Cheng. Mobilesal: Extremely efficient rgb-d salient object detection. arXiv preprint arXiv:2012.13095, 2020. 2
|
| 313 |
+
[64] Zhe Wu, Li Su, and Qingming Huang. Cascaded partial decoder for fast and accurate salient object detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 3907-3916, 2019. 1, 2, 5, 6
|
| 314 |
+
[65] Lingxi Xie and Alan Yuille. Genetic cnn. In IEEE Conf. Comput. Vis. Pattern Recog., pages 1379-1388, 2017. 3
|
| 315 |
+
[66] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In IEEE Conf. Comput. Vis. Pattern Recog., pages 1492-1500, 2017. 4
|
| 316 |
+
[67] Yingyue Xu, Dan Xu, Xiaopeng Hong, Wanli Ouyang, Rongrong Ji, Min Xu, and Guoying Zhao. Structured modeling of joint deep feature and prediction refinement for salient object detection. In Int. Conf. Comput. Vis., pages 3789-3798, 2019. 2, 5
|
| 317 |
+
[68] Qiong Yan, Li Xu, Jianping Shi, and Jiaya Jia. Hierarchical saliency detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 1155-1162, 2013. 7
|
| 318 |
+
[69] Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, and Ming-Hsuan Yang. Saliency detection via graph-based manifold ranking. In IEEE Conf. Comput. Vis. Pattern Recog., pages 3166-3173, 2013. 7
|
| 319 |
+
[70] Lewei Yao, Hang Xu, Wei Zhang, Xiaodan Liang, and Zhenguo Li. Sm-nas: structural-to-modular neural architecture search for object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12661-12668, 2020. 3
|
| 320 |
+
[71] Jiahui Yu and Thomas S Huang. Universally slimmable networks and improved training techniques. In Int. Conf. Comput. Vis., pages 1803-1811, 2019. 7
|
| 321 |
+
[72] Jiahui Yu, Pengchong Jin, Hanxiao Liu, Gabriel Bender, Pieter-Jan Kindermans, Mingxing Tan, Thomas Huang, Xiaodan Song, Ruoming Pang, and Quoc Le. Bignas: Scaling up neural architecture search with big single-stage models. In Eur. Conf. Comput. Vis., pages 702-717, 2020. 2, 3, 6
|
| 322 |
+
[73] Qihang Yu, Dong Yang, Holger Roth, Yutong Bai, Yixiao Zhang, Alan L Yuille, and Daguang Xu. C2fnas: Coarse-
|
| 323 |
+
|
| 324 |
+
to-fine neural architecture search for 3d medical image segmentation. In IEEE Conf. Comput. Vis. Pattern Recog., pages 4126-4135, 2020. 3
|
| 325 |
+
[74] Lu Zhang, Ju Dai, Hutchuan Lu, You He, and Gang Wang. A bi-directional message passing model for salient object detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 1741-1750, 2018. 2
|
| 326 |
+
[75] Pingping Zhang, Dong Wang, Huchuan Lu, Hongyu Wang, and Xiang Ruan. Amulet: Aggregating multi-level convolutional features for salient object detection. In Int. Conf. Comput. Vis., pages 202-211, 2017. 2, 3, 4, 8
|
| 327 |
+
[76] Wenqiang Zhang, Jiemin Fang, Xinggang Wang, and Wenyu Liu. Efficient pose estimation with neural architecture search. Computational Visual Media, pages 1-13, 2021. 3
|
| 328 |
+
[77] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In IEEE Conf. Comput. Vis. Pattern Recog., pages 6848-6856, 2018. 4
|
| 329 |
+
[78] Jia-Xing Zhao, Jiangjiang Liu, Den-Ping Fan, Yang Cao, Jufeng Yang, and Ming-Ming Cheng. EGNet: Edge guidance network for salient object detection. In Int. Conf. Comput. Vis., pages 8779-8788, 2019. 1, 2, 6
|
| 330 |
+
[79] Rui Zhao, Wanli Ouyang, Hongsheng Li, and Xiaogang Wang. Saliency detection by multi-context deep learning. In IEEE Conf. Comput. Vis. Pattern Recog., pages 1265-1274, 2015. 2
|
| 331 |
+
[80] Xiaoqi Zhao, Youwei Pang, Lihe Zhang, Huchuan Lu, and Lei Zhang. Suppress and balance: A simple gated network for salient object detection. In Eur. Conf. Comput. Vis., pages 35-51, 2020. 2, 5
|
| 332 |
+
[81] Huajun Zhou, Xiaohua Xie, Jian-Huang Lai, Zixuan Chen, and Lingxiao Yang. Interactive two-stream decoder for accurate and fast saliency detection. In IEEE Conf. Comput. Vis. Pattern Recog., June 2020. 1, 2, 4, 6
|
| 333 |
+
[82] Tao Zhou, Deng-Ping Fan, Ming-Ming Cheng, Jianbing Shen, and Ling Shao. RGB-d salient object detection: A survey. Computational Visual Media, 7(1):37-69, 2021. 2
|
| 334 |
+
[83] Wangjiang Zhu, Shuang Liang, Yichen Wei, and Jian Sun. Saliency optimization from robust background detection. In IEEE Conf. Comput. Vis. Pattern Recog., pages 2814-2821, 2014. 2
|
| 335 |
+
[84] Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. In Int. Conf. Learn. Represent., 2017. 3
|
| 336 |
+
[85] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In IEEE Conf. Comput. Vis. Pattern Recog., pages 8697-8710, 2018. 3
|
inasintegralnasfordeviceawaresalientobjectdetection/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b284917b289150897ea2629ec738304fbf16b47383e6fe50e32fa1539817deb1
size 929209

inasintegralnasfordeviceawaresalientobjectdetection/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6f6c218a746fd3140f7972e40b21f7a4bf9e5c04e7e8b4ab3cabdcf9c3aaaba4
size 474786

ipokepokingastillimageforcontrolledstochasticvideosynthesis/96db01a0-9287-4a41-a8c2-26903cab6f59_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:06b2d10bd6c3f88d6f667c300b49be1dc7804684245e86605b5404b9b710c5f5
size 87287

ipokepokingastillimageforcontrolledstochasticvideosynthesis/96db01a0-9287-4a41-a8c2-26903cab6f59_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2e529fe1c0117c1b4a67c41fec61619bd40b62d7f3408f2681efda168cbc637c
size 111100

ipokepokingastillimageforcontrolledstochasticvideosynthesis/96db01a0-9287-4a41-a8c2-26903cab6f59_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0294578a19c26b50374b581aed1b073b704eadca1bdb334a1f17dfca471d34c5
size 7348850
ipokepokingastillimageforcontrolledstochasticvideosynthesis/full.md
ADDED
@@ -0,0 +1,337 @@
# iPOKE: Poking a Still Image for Controlled Stochastic Video Synthesis
|
| 2 |
+
|
| 3 |
+
Andreas Blattmann$^{1,2}$, Timo Milbich$^{1,2}$, Michael Dorkenwald$^{2}$, Björn Ommer$^{1,2}$

$^{1}$Ludwig Maximilian University of Munich, $^{2}$IWR, Heidelberg University, Germany
|
| 7 |
+
|
| 8 |
+
# Abstract
|
| 9 |
+
|
| 10 |
+
How would a static scene react to a local poke? What are the effects on other parts of an object if you could locally push it? There will be distinctive movement, despite evident variations caused by the stochastic nature of our world. These outcomes are governed by the characteristic kinematics of objects that dictate their overall motion caused by a local interaction. Conversely, the movement of an object provides crucial information about its underlying distinctive kinematics and the interdependencies between its parts. This two-way relation motivates learning a bijective mapping between object kinematics and plausible future image sequences. Therefore, we propose iPOKE – invertible Prediction of Object Kinematics – that, conditioned on an initial frame and a local poke, allows to sample object kinematics and establishes a one-to-one correspondence to the corresponding plausible videos, thereby providing a controlled stochastic video synthesis. In contrast to previous works, we do not generate arbitrary realistic videos, but provide efficient control of movements, while still capturing the stochastic nature of our environment and the diversity of plausible outcomes it entails. Moreover, our approach can transfer kinematics onto novel object instances and is not confined to particular object classes. Our project page is available at https://bit.ly/3dJN4Lf.
|
| 11 |
+
|
| 12 |
+
# 1. Introduction
|
| 13 |
+
|
| 14 |
+
Imagine a 3-year-old standing next to a stacked pyramid of glasses in a shop. Can you sense the urge to pull one glass out—just to observe what happens. We have an inborn curiosity to understand how the world around us reacts to our actions, so we can eventually imagine and predict their outcome beforehand. This ability to predict is the prerequisite for targeted, goal-oriented interaction with our world rather than random manipulation of our environment. Once we are older, we have also learned to generalize and predict the dynamics of previously unseen objects when they are pulled or poked; and the less audacious have understood that it is often more effective to have others do daring experiments like the one above (and pay the bill) while they are learning by merely watching the outcome. While such experiments
|
| 15 |
+
|
| 16 |
+

|
| 17 |
+
Figure 1. iPOKE: Conditioned on a local poke controlling desired object motion in a static image, our invertible model learns a representation of the remaining object kinematics for arbitrary object classes. Once learned, our framework allows users to locally control intended movements while sampling diverse realistic motion for the remainder of the object and to even transfer kinematics to unseen object instances.
|
| 18 |
+
|
| 19 |
+
are not just fun to watch, they also help to imagine the many possible outcomes caused by the stochastic nature of the many factors beyond our control.
|
| 20 |
+
|
| 21 |
+
Given a single static image, how can an artificial vision system imagine, i.e. synthesize, the many possible outcomes when locally manipulating the scene? It needs to learn how a local poke affects different parts of an object and the resulting kinematics [49]. Conditioned on only the start frame and the displacement of a single pixel, we want to synthesize multiple videos, each showing the different plausible future dynamics. To render this generative, stochastic approach widely applicable, training should only require videos of objects in motion, but no ground truth information regarding the forces acting on an object such as a local poke. The representation of the kinematics should then generalize to similar objects not seen during training in contrast to instance specific models [15]. Moreover, the method should work for arbitrary objects, rather than being tuned to just a single class [1, 21]. Therefore, no prior motion model is available, but all kinematics have to be learned from the unannotated
|
| 22 |
+
|
| 23 |
+
video data. Previous work on video synthesis has mainly explored two opposing research directions: (i) uncontrolled future frame prediction [24, 10, 53, 59] synthesizing videos based on a start frame, but with no control of scene dynamics, and (ii) densely controlled video synthesis [50, 73, 77, 74] demanding tedious, per-pixel guidance how the video will evolve such as by requiring the object motion to be provided per pixel [50, 73, 77] or a future target frame [74]. Our sparsely controlled video synthesis based on few local user interactions constitutes the rarely investigated midground in between, allowing for specific but still efficient control of kinematics.
|
| 24 |
+
|
| 25 |
+
In this paper, we present a model for exercising local control over the kinematics of objects observed in an image. Indicating movements of individual object parts with a simple mouse drag provides sufficient input for our model to synthesize plausible, holistic object motion. To capture the ambiguity in the global object articulation, we learn a dedicated latent kinematic representation. The synthesis problem is then formulated as an invertible mapping between object kinematics and video sequences conditioned on the observed object manipulation. Due to its stochastic nature, our latent representation allows to sample and transfer diverse kinematic realizations fitting to the sparse local user input to then infer and synthesize plausible video sequences as shown in Fig. 1.
|
| 26 |
+
|
| 27 |
+
To evaluate our model on controlled stochastic video synthesis, we conduct quantitative and qualitative experiments on four different datasets exhibiting complex and highly articulated objects, such as humans and plants. Comparisons with the state-of-the-art in stochastic and controlled video prediction demonstrate the capability of our model to predict and synthesize plausible, diverse object articulations inferred from local user control.
|
| 28 |
+
|
| 29 |
+
# 2. Related Work
|
| 30 |
+
|
| 31 |
+
Video Synthesis. Video synthesis denotes the general task of generating novel video sequences. While some works solely focus on transferring a predefined holistic motion between objects [73] or interpolating motion between a starting and end frame [54, 79, 43, 4, 55], the most commonly addressed problem is video prediction. Given an initially observed video sequence, the goal is to infer a likely continuation into the future. To this end, proposed methods either generate a single, deterministic video sequence [76, 70, 69, 77, 6] or model the distribution over likely future sequences [23, 16, 40, 59, 53, 10, 19]. Moreover, the employed model architectures exhibit large divergence with latent RNN-based methods being the dominant modelling choice [53, 24]. However, also more complex models based on transformers [75], pixel-level autoregression [51, 40, 45, 16, 70, 10], factorization of dynamics and content [53, 24] and image warping using optical
|
| 32 |
+
|
| 33 |
+
flow [71, 44, 25] have been proposed. Despite these methods showing promising results, none of them is able to exercise control over the video generation process.
|
| 34 |
+
|
| 35 |
+
Controllable Video Synthesis. Exercising user control over the video synthesis process requires a detailed understanding of the object kinematics and interplay of the object parts. To circumvent the difficult task of learning object kinematics directly from data, Davis et al. [15] resort to fixed, linear mathematical models. Thus, they can only consider constrained oscillating motion around an object's rest state. In contrast, our model learns natural, unconstrained object kinematics from video and is thus also applicable to highly complex articulations such as those of humans. Other works rely on a low-dimensional, parametric representation, e.g. keypoints, to transfer motion between videos [1, 5] or to synthesize videos based on action labels [81]. Given such assumptions, these works cannot be universally applied to arbitrary object categories and allow only for coarse control compared to our fine-grained, local object manipulations. By iteratively warping single images with local sets of estimated optical flow vectors, [27] takes a first step towards sparsely controlled video generation for arbitrary object categories. However, due to the method's warping-based nature, it is still not able to generate temporally coherent motion and requires optical flow guidance for each individual predicted image frame. To overcome such limitations, [6] introduces a hierarchical dynamics model, which can predict complex object dynamics controlled by a single optical flow vector in a given image, but still does not consider the natural motion ambiguity of the remaining, uncontrolled object parts. In contrast, our model learns a dedicated, stochastic kinematics representation modeling this uncertainty of the object remainder and, thus, is capable of synthesizing locally controlled but also diverse object motion.
|
| 36 |
+
|
| 37 |
+
Invertible Neural Networks. Invertible neural networks (INNs) are learnable bijective functions often used to transform between two probability distributions, thus being a natural choice for addressing inverse problems [3], introspecting and explaining neural network representations [22, 32] and domain transfer [62, 63]. Typically, INNs are realized as generative normalizing flows [60, 47, 37] which have recently also found application in image [37, 58] and video synthesis [39, 19]. In this work, we use normalizing flows to learn the missing residual information, i.e. the latent object kinematics, that is not determined by the sparse local control over part of the object motion.
|
| 38 |
+
|
| 39 |
+
# 3. Approach
|
| 40 |
+
|
| 41 |
+
Controlled video synthesis seeks to generate a plausible future video sequence $\mathbf{X} \in \mathbb{R}^{T \times H \times W \times 3}$ given an initial frame $x_0$ and a user-defined control $c$ that locally specifies part of the video dynamics,
|
| 42 |
+
|
| 43 |
+
$$
|
| 44 |
+
\left(x _ {0}, c\right) \mapsto \boldsymbol {X} = \left[ x _ {1}, \dots , x _ {T} \right]. \tag {1}
|
| 45 |
+
$$
|
| 46 |
+
|
| 47 |
+

|
| 48 |
+
Figure 2. Overview of our proposed framework iPOKE for controlled video synthesis: We apply a conditional bijective transformation $\tau_{\theta}$ to learn a residual kinematics representation $r$ capturing all video information not present in the user control $c$ defining intended local object motion in an image frame $x_0$ (orange path). To retain feasible computational complexity, we pre-train a video autoencoding framework $(E, GRU, D)$ (blue path) yielding a dedicated video representation $z$ as training input for $\tau_{\theta}$ . Controlled video synthesis is achieved by sampling a residual $r$ , thus defining plausible motion for the remaining object parts not directly affected by $c$ , and generating video sequences $\hat{\mathbf{X}}$ from the resulting $z = \tau_{\theta}(r|x_0, c)$ using GRU and $D$ (black path).
|
| 49 |
+
|
| 50 |
+
Our goal is here to efficiently control video synthesis. Instead of having users tediously specify the dynamics at each pixel, e.g. by providing a dense vector field [77], $c$ should only be a very sparse signal. Thus, we assume to be provided only a local poke, the desired movement at one image location between start and end frame. The poke $c \in \mathbb{R}^4$ consequently comprises a shift, $c_{1:2}$ , at a single pixel location, $c_{3:4}$ , performed only by a simple mouse drag. Evidently, even densely defining the motion of every pixel between start and end frame does not fully define the object dynamics in between, even less so only a sparse 4D $c$ vector. Given this highly limited conditioning information, we model the distribution of all plausible future videos
|
| 51 |
+
|
| 52 |
+
$$
|
| 53 |
+
\boldsymbol {X} \sim p (\boldsymbol {X} | x _ {0}, c), \tag {2}
|
| 54 |
+
$$
|
| 55 |
+
|
| 56 |
+
thus contrasting previous work, which only yields some arbitrary, uncontrolled realization [40, 45, 16, 10]. Our main challenge is then to model the object kinematics which define how the movement of one part of an object affects the rest, thus yielding overall concerted object dynamics. As $\mathbf{X}$ is a random variable, the mapping in (1) is actually non-unique. There is a lot of residual information $r$ beyond user control, which we need to turn (1) into a unique one-to-one mapping
|
| 57 |
+
|
| 58 |
+
$$
|
| 59 |
+
(x _ {0}, c, r) \mapsto \boldsymbol {X}, \tag {3}
|
| 60 |
+
$$
|
| 61 |
+
|
| 62 |
+
where the residual $r$ would then capture object kinematics specifying the movement of the remaining object parts given the sparse local control $c$ .
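As a rough illustration of this interface, the following sketch packs a single mouse drag into the 4D poke described above; the helper name and the drag-to-poke conversion are illustrative, not part of the original method:

```python
import torch

def poke_from_drag(start_xy, end_xy):
    """Pack a mouse drag into the 4D poke c = (shift_x, shift_y, row, col).

    start_xy / end_xy are the (x, y) pixel coordinates of the drag in the
    start frame; c_{1:2} is the desired shift, c_{3:4} the poked pixel location.
    """
    (sx, sy), (ex, ey) = start_xy, end_xy
    return torch.tensor([float(ex - sx), float(ey - sy), float(sy), float(sx)])

# e.g. dragging the pixel at (64, 80) by 12 px to the right and 5 px down
c = poke_from_drag((64, 80), (76, 85))
```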
|
| 63 |
+
|
| 64 |
+
# 3.1. Invertible Controlled Video Synthesis
|
| 65 |
+
|
| 66 |
+
Seeking to find the mapping (3) we naturally arrive at a problem of stochastic video prediction. So far, the dominant approach to such problems are conditional variational autoencoder (cVAE) based models [38, 61, 65]. cVAE employs strong regularization to remove the given conditioning from the remaining data variations, thus facing a trade-off between synthesis quality and capturing all these variations [11, 83], in our case the diverse object kinematics $r$ . To avoid this, we use a conditional bijective, i.e. one-to-one, mapping $\tau$ between each residual $r$ and the corresponding video
|
| 67 |
+
|
| 68 |
+
$$
|
| 69 |
+
\boldsymbol {X} = \tau (r | x _ {0}, c) \tag {4}
|
| 70 |
+
$$
|
| 71 |
+
|
| 72 |
+
so that all plausible $X$ for a given conditioning can be synthesized. Moreover, the inverse mapping $\tau^{-1}$ allows to recover the residual kinematics for any $X$ ,
|
| 73 |
+
|
| 74 |
+
$$
|
| 75 |
+
r = \tau^ {- 1} (\boldsymbol {X} | x _ {0}, c), \tag {5}
|
| 76 |
+
$$
|
| 77 |
+
|
| 78 |
+
which can then be considered a random variable $r \sim p(r|x_0, c)$, since $\tau^{-1}$ is unique and $X$ is a random variable defined in (2). To solve the conditional video synthesis task, we now show how to learn $\tau$ such that $r$ (i) indeed contains all video information not present in $(x_0, c)$ and (ii) follows a distribution which can be easily sampled from.
|
| 79 |
+
|
| 80 |
+
Learning the invertible mapping $\tau$ . We equip $\tau$ with parameters $\theta$ which, by employing Eq. (5), can be learned from training videos $X$ . By the change-of-variables theorem for
|
| 81 |
+
|
| 82 |
+

|
| 83 |
+
Figure 3. Controlled stochastic video synthesis showing three video sequences for the same user control $c$ (red arrow) and randomly sampled kinematics $r$ on the PP dataset. Our model generates diverse, plausible object motion while accurately approaching the target location (red dot) for the controlled object part. Additionally, to ease comparison of the motion difference between samples, we show optical flow maps between the first and last frame of each sequence. Best viewed in video on our project page.
|
| 84 |
+
|
| 85 |
+
probability distributions, we have
|
| 86 |
+
|
| 87 |
+
$$
|
| 88 |
+
\begin{array}{l} p (\boldsymbol {X} | x _ {0}, c) = \frac {p \left(\tau_ {\theta} (r | x _ {0} , c) | x _ {0} , c\right)}{\mid \det J _ {\tau_ {\theta}} (r | x _ {0} , c) \mid} \\ = p \left(\tau_ {\theta} ^ {- 1} (\boldsymbol {X} | x _ {0}, c) | x _ {0}, c\right) \cdot | \det J _ {\tau_ {\theta} ^ {- 1}} (\boldsymbol {X} | x _ {0}, c) |. \tag {6} \\ \end{array}
|
| 89 |
+
$$
|
| 90 |
+
|
| 91 |
+
Here, $J_{\tau_{\theta}}$ denotes the Jacobian of the transformation $\tau_{\theta}$ and $|\operatorname{det}[\cdot]|$ the absolute value of the determinant. Recall that we have to learn $\tau_{\theta}$ such that $r$ contains all video information not present in $(x_0, c)$. Effectively, this requires learning $\tau_{\theta}$ such that $r$ is independent of $(x_0, c)$. This can be achieved by introducing some independent prior $q(r)$ and minimizing KL $[p(r|x_0, c)\| q(r)]$, which then constitutes an upper bound on the mutual information MI $[r, (x_0, c)]$ [2, 62] as derived in Appendix D.1 and thus indeed enforces the intended independence. Moreover, by using Eq. (5) and (6), we can express KL $[p(r|x_0, c)\| q(r)]$ as a function of the training data $X$, which facilitates learning of $\tau_{\theta}$ by minimizing
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
\begin{array}{l} \operatorname {K L} [ p (r | x _ {0}, c) \| q (r) ] \propto \mathbb {E} _ {\boldsymbol {X}} \left[ - \log \left(q \left(\tau_ {\theta} ^ {- 1} (\boldsymbol {X} | x _ {0}, c)\right)\right) \right. \\ \left. - \log | \det J _ {\tau_ {\theta} ^ {- 1}} (\boldsymbol {X} | x _ {0}, c) | \right]. \tag {7} \\ \end{array}
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
By selecting $q(r) = \mathcal{N}(r|0,\mathbf{I})$ [38, 80] and inserting this into Eq. (7) we arrive at the simple objective function
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
\begin{array}{l} \min _ {\theta} \mathcal {L} (\tau_ {\theta}, \boldsymbol {X}, x _ {0}, c) = \mathbb {E} _ {\boldsymbol {X}, x _ {0}, c} [ \| \tau_ {\theta} ^ {- 1} (\boldsymbol {X} | x _ {0}, c) \| _ {2} ^ {2} \tag {8} \\ - \log | \det J _ {\tau_ {\theta} ^ {- 1}} (\boldsymbol {X} | x _ {0}, c) | ]. \\ \end{array}
|
| 101 |
+
$$
|
| 102 |
+
|
| 103 |
+

|
| 104 |
+
Figure 4. Controlled stochastic video synthesis showing three video sequences for the same user control $c$ (red arrow) and randomly sampled kinematics $r$ on the iPER dataset. Our model generates diverse, plausible object motion while accurately approaching the target location (red dot) for the controlled body part. Best viewed in video on our project page.
|
| 105 |
+
|
| 106 |
+
A detailed derivation can be found in Appendix D.2. Note that optimizing Eq. (8) simultaneously (i) ensures independence of $r$ and $(x_0, c)$ and (ii) yields a generative probabilistic model, as we can easily draw samples from $q(r)$ and use the conditional mapping (4) to obtain synthesized videos. Thus, our model is capable of synthesizing videos in a controlled but nonetheless stochastic manner without facing the trade-off encountered in cVAEs.
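For concreteness, a minimal sketch of the training objective in Eq. (8), assuming a conditional flow object whose inverse pass returns the residual together with the accumulated log-determinant; this `flow` interface is hypothetical, and the Gaussian normalization constants dropped in the paper's formulation are also omitted here:

```python
import torch

def flow_nll_loss(flow, X, x0, c):
    """Maximum-likelihood objective of Eq. (8) for a conditional normalizing flow.

    `flow.inverse` is assumed to return r = tau_theta^{-1}(X | x0, c) and the
    summed log|det J| of the inverse pass; both assumptions are illustrative.
    """
    r, logdet = flow.inverse(X, cond=(x0, c))
    nll = 0.5 * r.flatten(1).pow(2).sum(dim=1)   # -log N(r | 0, I) up to an additive constant
    return (nll - logdet).mean()

# Sampling goes the other way: draw r ~ N(0, I) and push it through the
# forward pass of the flow for the same conditioning (x0, c).
```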
|
| 107 |
+
|
| 108 |
+
# 3.2. Architecture for Tractably Learning $\tau_{\theta}$
|
| 109 |
+
|
| 110 |
+
To realize the conditionally bijective nature of our mapping $\tau_{\theta}$ , we implement it as a conditional invertible neural network (cINN) [56, 18, 17, 62, 19], which requires equal dimensionality of the transformed random variables. Thus, $\mathbf{X}$ would demand $r$ to be very high dimensional, entailing infeasible computational complexity. As a remedy, we replace $\mathbf{X}$ with a compact, information-preserving video encoding $z \in \mathbb{R}^{h \times w \times d}$ , with $h \cdot w \cdot d \ll H \cdot W \cdot 3 \cdot T$ , learned by a standard sequence autoencoding framework [38] consisting of a 3D-ResNet [28] encoder $E$ , a GRU [12] for temporal enrollment in the latent space, and an image decoder $G$ to obtain video predictions $\hat{\mathbf{X}}$ . Prior to learning $\tau_{\theta}$ , we train this model to reconstruct training videos $\mathbf{X}$ by using a respective loss $\mathcal{L}_{rec}$ and additionally add static and temporal discriminators [13, 73], $\mathcal{D}_S$ and $\mathcal{D}_T$ , to increase visual and temporal coherence of $\hat{\mathbf{X}}$ , thus resulting in the objective
|
| 111 |
+
|
| 112 |
+
$$
|
| 113 |
+
\mathcal {L} _ {a e} = \mathcal {L} _ {r e c} + \mathcal {L} _ {\mathcal {D} _ {S}} + \mathcal {L} _ {\mathcal {D} _ {T}}. \tag {9}
|
| 114 |
+
$$
|
| 115 |
+
|
| 116 |
+
Detailed information on implementation and training can be found in the Appendix E.1. Afterwards we can learn $\tau_{\theta}$ from the compact latent video encodings $z = E(\mathbf{X})$ instead of high-dimensional videos $\mathbf{X}$ .
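The following sketch only illustrates the tensor shapes along this autoencoding path; the tiny encoder, GRU and decoder below are stand-ins for the 3D-ResNet, latent GRU and image decoder, not the actual architecture:

```python
import torch
import torch.nn as nn

# Illustrative shapes only: a 10-frame 64x64 RGB clip is compressed to a
# compact latent z, unrolled over T steps by a GRU, and decoded frame by frame.
B, T, H, W, d, h, w = 2, 10, 64, 64, 64, 8, 8

E = nn.Sequential(nn.Conv3d(3, d, 3, padding=1), nn.AdaptiveAvgPool3d((1, h, w)))  # stand-in for the 3D-ResNet encoder
gru = nn.GRU(input_size=d * h * w, hidden_size=d * h * w, batch_first=True)
D = nn.Sequential(nn.Conv2d(d, 3, 3, padding=1), nn.Upsample((H, W)))              # stand-in for the image decoder

X = torch.randn(B, 3, T, H, W)
z = E(X).squeeze(2).flatten(1)                     # (B, d*h*w) video encoding
z_seq, _ = gru(z.unsqueeze(1).repeat(1, T, 1))     # simplistic temporal unrolling in latent space
frames = [D(z_t.view(B, d, h, w)) for z_t in z_seq.unbind(1)]
X_hat = torch.stack(frames, dim=2)                 # (B, 3, T, H, W) reconstruction
```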
|
| 117 |
+
|
| 118 |
+

|
| 119 |
+
Figure 5. Controlled stochastic video synthesis showing three video sequences for the same user control $c$ (red arrow) and randomly sampled kinematics $r$ on the Human3.6m dataset. Our model generates diverse, plausible object motion while accurately approaching the target location (red dot) for the controlled body part. Best viewed in video on our project page.
|
| 120 |
+
|
| 121 |
+
So far, cINNs operating on latent representations have been implemented as a sequence of fully connected layers [32, 62, 63], thus discarding the spatial information naturally constituting visual data. However, since the conditioning $c$ describes a spatial shift of a single pixel, such architectures are not able to effectively leverage this information. To this end, we use the poke $c$ to define a two-channel map $C \in \mathbb{R}^{H \times W \times 2}$ with $C_{c_3, c_4, 1:2} = c_{1:2}$ and zeros elsewhere, and instead design a fully convolutional cINN, such that the crucial spatial information about the control location can be incorporated as well as possible. More specifically, our architecture comprises $K$ subsequently arranged cINN sub-blocks. By directly forwarding a portion $\frac{d}{K}$ of the output of each block to the final representation $r$, we reduce memory requirements and avoid vanishing gradients for large $K$ [18, 48]. Within the $k$-th block, we apply a series of $N_k$ masked convolutions [48], which have been shown to offer improved expressivity compared to standard flow architectures such as coupling layers [17, 18, 37]. Finally, the conditioning information $(x_0, c)$ is separately processed by two dedicated encoding networks $\Phi_{x_0}$ and $\Phi_c$, yielding representations of the same spatial size as the flow input, to which they are concatenated before each masked convolution. We visualize the architecture and training in Fig. 2 and provide further details in Appendix E.2.
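A minimal sketch of rasterizing one or more pokes into the two-channel conditioning map $C$ defined above; the function name and argument layout are illustrative:

```python
import torch

def poke_map(pokes, H, W):
    """Rasterize sparse pokes into the two-channel map C in R^{H x W x 2}.

    Each poke is (dx, dy, row, col); C holds the desired shift at the poked
    pixel and zeros everywhere else, so the conditioning keeps its spatial layout.
    """
    C = torch.zeros(H, W, 2)
    for dx, dy, row, col in pokes:
        C[int(row), int(col), 0] = dx
        C[int(row), int(col), 1] = dy
    return C

C = poke_map([(12.0, 5.0, 80, 64)], H=128, W=128)
```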
|
| 122 |
+
|
| 123 |
+
# 3.3. Automatic Simulation of User Control
|
| 124 |
+
|
| 125 |
+
Training our model for controlled video synthesis relies on user controls $c$ and corresponding video sequences $X$ depicting natural object responses to be available. Providing sufficient amounts of such training data for every targeted object category is tedious and costly. Instead, we employ an efficient self-supervised strategy to artificially generate such
|
| 126 |
+
|
| 127 |
+

|
| 128 |
+
Figure 6. Motion Transfer on iPER: We extract the residual kinematics from a ground truth sequence (top row) and use it together with the corresponding control $c$ (red arrow) to animate an image $x_{t}$ showing similar initial object posture (second row). We also visualize a random sample from $q(r)$ for the same $(x_{t}, c)$ (bottom row), indicating that the residual kinematics representation solely contains motion information not present in $(x_{t}, c)$ (for a detailed description cf. Sec. 4.2). Best viewed in video on our project page.
|
| 129 |
+
|
| 130 |
+
interactions directly from the observed motion of a collection of cheaply available training videos $\mathbf{X}$. To this end, we extract dense optical flow maps [29] $F\in \mathbb{R}^{H\times W\times 2}$ between the start and end frames, $x_0$ and $x_{T}$, of $\mathbf{X}$, whose individual shift vectors can be interpreted as sparse pixel displacements $c = \{(F_{l_{n,1},l_{n,2},1},F_{l_{n,1},l_{n,2},2})\}_{n = 1}^{N_c}$. During training, we randomly sample such simulated pokes at positions $l_{n}$ which exhibit sufficiently large motion that reliably corresponds to the foreground object. In contrast to [6], which uses a similar strategy but restricts the user control to be defined by only a single poke, we allow a user to control the degrees of freedom of the object articulation by training our model on up to 5 simultaneous interactions $c$, i.e. on a variable number $N_{c}\in [1,5]$ of local pokes. Note that for inferring user controls after training we do not require optical flow estimates, but use simple mouse drags instead.
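A sketch of this poke simulation, assuming a dense flow map between $x_0$ and $x_T$ is already available; the magnitude threshold used to isolate foreground motion is an illustrative stand-in for the paper's selection criterion:

```python
import torch

def sample_pokes(F, n_pokes, min_magnitude=1.0):
    """Simulate user pokes from a dense flow map F of shape (H, W, 2).

    Pixels whose displacement magnitude exceeds `min_magnitude` (an illustrative
    threshold for sufficiently large motion) are candidate poke locations.
    """
    mag = F.norm(dim=-1)
    rows, cols = torch.nonzero(mag > min_magnitude, as_tuple=True)
    idx = torch.randperm(rows.numel())[:n_pokes]
    return [(F[r, c, 0].item(), F[r, c, 1].item(), r.item(), c.item())
            for r, c in zip(rows[idx], cols[idx])]

# During training the number of pokes is varied, e.g. n_pokes drawn uniformly from {1, ..., 5}.
F = torch.randn(128, 128, 2) * 3.0   # stand-in for an estimated flow map between x0 and xT
pokes = sample_pokes(F, n_pokes=torch.randint(1, 6, (1,)).item())
```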
|
| 131 |
+
|
| 132 |
+
# 4. Experiments
|
| 133 |
+
|
| 134 |
+
Subsequently, we evaluate our model for controlled stochastic video synthesis on four video datasets showing diverse and articulated object categories of humans and plants. Implementation details and video material can be found in the Appendices F and G, and on our project page.
|
| 135 |
+
|
| 136 |
+
# 4.1. Datasets
|
| 137 |
+
|
| 138 |
+
We evaluate our approach to understand and synthesize object dynamics on the following four datasets:
|
| 139 |
+
|
| 140 |
+
Poking-Plants (PP) [6] consists of 27 videos of 13 different types of pot plants. Learning a single kinematics model for all plants is notably challenging given the large variance in shape and texture of the plants. Overall, PP contains 43k frames, of which a fifth is used as a test set and the
|
| 141 |
+
|
| 142 |
+

|
| 143 |
+
Figure 7. Understanding object kinematics: By sampling 1000 random control inputs at location $l = c_{3:4}$ for a fixed image $x_0$ we obtain varying video sequences, from which we compute motion correlations for $l$ with all remaining pixels. By mapping these correlations to the pixel space, we visualize the interplay correlation of distinct object parts, thus yielding insights about the learned kinematics.
|
| 166 |
+
remainder as training data.
|
| 167 |
+
|
| 168 |
+
iPER [42] consists of 30 humans with diverse styles performing various simple and complex movements. We follow the official train/test split, which results in a training set of 180k frames and a test set of 49k frames.
|
| 169 |
+
|
| 170 |
+
Tai-Chi-HD [64] is a collection of 280 in-the-wild Tai-Chi videos from YouTube. We follow previous work [64] and use 252 videos for training and 28 videos for testing. Given the large variance in background and camera movement, this dataset tests the real-world applicability of our model. Since the motion between subsequent frames is often small, we skip every other frame.
|
| 171 |
+
|
| 172 |
+
Human3.6m [30] is a large-scale human motion dataset with video sequences of 7 human actors performing 17 different actions. We follow previous work [76, 53, 24] by center-cropping and downsampling the videos to $6.25\mathrm{Hz}$ and by using actors S1, S5, S6, S7 and S8 (600 videos) for training and actors S9 and S11 (239 videos) for testing.
|
| 173 |
+
|
| 174 |
+
# 4.2. Qualitative Evaluation
|
| 175 |
+
|
| 176 |
+
Controlled Stochastic Video Synthesis. In Fig. 3, 4 and 5 we show examples for controlled stochastic video synthesis generated by our proposed model on the PP, iPER and Human3.6m datasets. For each dataset, we show the ground-truth frames following a fixed given image $x_0$ , as well as three synthesized examples generated from a fixed user control $c$ (red arrows) and randomly sampled kinematics realizations $r \sim q(r)$ . Examples for the Tai-Chi dataset can be found in the supplemental, where we also show additional synthesized videos based on control inputs from real human users and demonstrate our model to also plausibly react to different pokes at the same location. The individual videos are discussed in the Appendices A and B.
|
| 177 |
+
|
| 178 |
+
Transfer of Kinematics. Besides sampling plausible object kinematics, we can also apply our model to transfer the kinematics inferred from a source sequence $\mathbf{X}_s = [x_{s,0},\ldots ,x_{s,T}]$ to a novel object instance. To this end, we extract the corresponding residual kinematics $r_s = \tau_\theta^{-1}(\mathbf{X}_s|x_{s,0},c)$ for a user control $c$ simulated based on $\mathbf{X}_s$ and use Eq. (4) to animate a target image $x_t$ show
|
| 179 |
+
|
| 180 |
+
ing another object instance than $x_{s,0}$ with similar articulation. The resulting successfully transferred motion sequence $\hat{\pmb{X}}_t = \tau_\theta (r_s|x_t,c)$ is shown in Fig. 6 (second row) and compared to $\pmb{X}_{s}$ (top row). It can be seen, that the motion contained in $\pmb{X}_{s}$ is transferred to $\hat{\pmb{X}}_t$ but not the object appearance shown in $x_{0,s}$ , indicating that our model indeed has learned a residual representation $r$ solely containing kinematics. Additionally, we visualize a synthesized video sequence based on a random sample $r\sim q(r)$ of residual kinematics for the same conditioning $(x_{t},c)$ (bottom row), showing substantially different object motion except for the controlled body part and thereby providing evidence that $r$ is also independent of the user-control $c$ . More results of kinematics transfer can be found in Appendix A.2.
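Expressed with the hypothetical flow interface sketched earlier, the transfer amounts to one inverse pass on the source clip and one forward pass around the target frame:

```python
import torch

def transfer_kinematics(flow, decode_video, X_s, x_s0, x_t, c):
    """Transfer the kinematics of a source clip X_s onto a new target frame x_t.

    `flow` and `decode_video` are assumed to follow the illustrative interfaces
    used above (conditional inverse/forward pass and latent-to-video decoding).
    """
    r_s, _ = flow.inverse(X_s, cond=(x_s0, c))   # residual kinematics of the source, Eq. (5)
    z_t = flow.forward(r_s, cond=(x_t, c))       # same kinematics, new conditioning, Eq. (4)
    return decode_video(z_t)
```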
|
| 181 |
+
|
| 182 |
+
Understanding Object Kinematics. To demonstrate how well our model captures holistic object kinematics we analyze its understanding of the interplay of integral object parts. Therefore, we measure the pixel-wise correlations when applying 1000 randomly sampled user controls $c$ at a fixed location $l$ of a fixed image $x_0$, i.e. varying only direction and magnitude of the shift vector. To measure the correlation in motion of all pixels with respect to the fixed control location (and thus the remaining object parts with the controlled part), we first compute optical flow maps between the start frame $x_0$ and the end frame $x_T$ of all resulting synthesized video sequences. Next, we compute the shift of the tracked pixel locations in $x_T$ with respect to the interaction location $l$, thus obtaining 1000 [magnitude, angle] representations of the individual shifts. To measure the correlation of a pixel with $l$, we now compute the variance over these shifts. Fig. 7 illustrates the resulting correlation maps given different locations $l$ for both humans and plants. For humans, we obtain high correlations for pixels constituting a certain body part and for parts which are naturally connected to $l$, showing that our model correctly understands the body structure. For the plant, we see that pulling at locations close to the trunk (top middle and right) intuitively affects large parts of the object. Interacting with individual small leaves mostly has little effect on the remaining object, in contrast to the pixels representing the leaf.
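A simplified sketch of this correlation measure; the mapping from the per-pixel variance to a correlation value and the use of plain (non-circular) angle statistics are illustrative simplifications:

```python
import torch

def correlation_map(flows):
    """Correlation of every pixel's motion with the controlled location.

    `flows` has shape (S, H, W, 2): optical flow between x0 and x_T for S
    synthesized videos sharing the same poke location l. Low variance of the
    per-pixel (magnitude, angle) shift across samples indicates strong coupling.
    """
    mag = flows.norm(dim=-1)                          # (S, H, W)
    ang = torch.atan2(flows[..., 1], flows[..., 0])   # (S, H, W)
    variance = mag.var(dim=0) + ang.var(dim=0)        # per-pixel variance over samples
    return 1.0 / (1.0 + variance)                     # illustrative: high value = strongly coupled pixel
```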
|
| 183 |
+
|
| 184 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">PP [6]</td><td colspan="3">iPER [42]</td><td colspan="3">Tai-Chi [64]</td><td colspan="3">Human3.6m [30]</td></tr><tr><td>FVD ↓</td><td>LPIPS ↓</td><td>SSIM ↑</td><td>FVD ↓</td><td>LPIPS ↓</td><td>SSIM ↑</td><td>FVD ↓</td><td>LPIPS ↓</td><td>SSIM ↑</td><td>FVD ↓</td><td>LPIPS ↓</td><td>SSIM ↑</td></tr><tr><td>Hao [27]</td><td>361.51</td><td>0.16</td><td>0.72</td><td>235.08</td><td>0.11</td><td>0.88</td><td>341.79</td><td>0.12</td><td>0.78</td><td>259.92</td><td>0.10</td><td>0.93</td></tr><tr><td>Hao [27] w/ KP</td><td>-</td><td>-</td><td>-</td><td>141.07</td><td>0.04</td><td>0.93</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>II2V [6]</td><td>174.18</td><td>0.10</td><td>0.78</td><td>220.34</td><td>0.07</td><td>0.89</td><td>167.94</td><td>0.12</td><td>0.78</td><td>129.62</td><td>0.08</td><td>0.91</td></tr><tr><td>iPOKE (Ours)</td><td>63.06</td><td>0.06</td><td>0.69</td><td>77.50</td><td>0.06</td><td>0.87</td><td>100.69</td><td>0.08</td><td>0.74</td><td>119.77</td><td>0.06</td><td>0.93</td></tr></table>
|
| 185 |
+
|
| 186 |
+
Table 1. Comparison with recent methods for sparsely controlled video synthesis [27, 6].
|
| 187 |
+
|
| 188 |
+

|
| 189 |
+
Figure 8. Control accuracy: On the iPER dataset, we extract control signals $c$ based on ground truth keypoints and also estimate keypoints for the resulting synthesized videos. We only evaluate the errors with respect to those keypoints used to define $c$. The violins show the resulting MSE distributions. The numbers are the mean errors in keypoint space indicated by the black dots. Our model outperforms the baselines of Hao et al. by a large margin and even approaches their model which is trained on keypoints.
|
| 190 |
+
|
| 191 |
+
# 4.3. Quantitative Evaluation
|
| 192 |
+
|
| 193 |
+
As our proposed task of controlled and stochastic video synthesis has so far been unattempted, we cannot directly compare iPOKE to previous work. To nevertheless quantitatively show that our model reliably achieves this task, we separately compare against current state-of-the-art stochastic video prediction models [40, 10, 24] and sparsely controlled video synthesis approaches [27, 6]. For all competitors we used the provided pretrained models, where available, or trained the models using official code.
|
| 194 |
+
|
| 195 |
+
# Evaluation Metrics.
|
| 196 |
+
|
| 197 |
+
Motion Consistency. We evaluate the synthesis quality by using the Fréchet Video Distance [68] (FVD, lower-is-better) which is responsive to visual as well as temporal coherence and uses an I3D network [67] trained on the Kinetics [34] dataset as backbone. Unterthiner et al. [68] showed that the metric correlates well with human judgement. The FVD-scores we report are obtained from video of length 10.
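For reference, FVD fits Gaussians to I3D features of real and generated videos and compares them with the Fréchet distance; the sketch below shows only that distance computation (feature extraction with the pretrained I3D backbone is omitted):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """Frechet distance between Gaussian fits of two feature sets (rows = videos).

    For FVD the rows would be I3D features of real and generated clips; here we
    only illustrate the distance between the fitted Gaussians.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov1 = np.cov(feats_real, rowvar=False)
    cov2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(cov1 @ cov2).real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(cov1 + cov2 - 2.0 * covmean))
```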
|
| 198 |
+
|
| 199 |
+
Synthesis Quality. Since we have no direct means to evaluate how well iPOKE models object kinematics, we compare its synthesized videos against the groundtruth using two commonly used framewise metrics, as producing incorrect kinematics would lead to large errors between the individual generated and groundtruth frames. We average over time and over 5 samples due to the stochastic nature of our model. As it has been shown to account for high- and low-frequency image differences and also to correlate well with human judgement, LPIPS [82] (lower-is-better) is the metric of choice for this task. Additionally, due to its wide application, we report framewise discrepancy as measured by SSIM [84]. However, as this metric compares image patches
|
| 200 |
+
|
| 201 |
+
based on the L2 distance, it is known to be deceivable by blurry predictions.
|
| 202 |
+
|
| 203 |
+
Motion Diversity. Following previous work [40, 85] we evaluate the diversity by computing mutual distances between the individual frames of different video samples (while fixing the user control) using the LPIPS [82] metric. Moreover, we also directly evaluate the diversity in the pixel space using the MSE, thus measuring low-frequency image differences.
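A sketch of this diversity score as the mean pairwise distance between samples drawn for one fixed $(x_0, c)$; pixel-space MSE is used as the default distance here, and an LPIPS network could be plugged in instead:

```python
import itertools
import torch

def diversity_score(samples, distance=lambda a, b: (a - b).pow(2).mean()):
    """Mean pairwise distance between video samples for one fixed (x0, c).

    `samples` is a list of tensors of shape (T, 3, H, W); the default distance
    is pixel-space MSE, mirroring the DIV-MSE variant of the score.
    """
    pairs = list(itertools.combinations(range(len(samples)), 2))
    dists = [distance(samples[i], samples[j]) for i, j in pairs]
    return torch.stack([torch.as_tensor(d) for d in dists]).mean()
```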
|
| 204 |
+
|
| 205 |
+
Controllable Video Synthesis. We compare our model with the considered methods for sparsely controlled video synthesis [27, 6] on all considered datasets using LPIPS [82], SSIM [84] and FVD [68] on images of resolution of $128 \times 128$ . Note that both competing baselines are limited in that they provide no means to stochastically model the inherent ambiguity of the non-controlled object parts. Additionally, [27] lacks a dedicated dynamics model, as this method is based on a warping technique, which we describe in Appendix F, and requires more than one control inputs to reliably generate complex object articulation. Due to these limitations, our model exhibits significantly better temporal and visual consistency as indicated by the large gaps in FVD and LPIPS scores in Tab. 1. To provide a stronger baseline, we also train and evaluate the model of Hao et al. with input trajectories based on groundtruth keypoints (Hao w/KP) which are readily available for the iPER dataset and much more reliable than those based on estimated optical flow. Despite this advantage, we also outperform this baseline in FVD and generate similarly sharp image frames as indicated by comparable LPIPS scores.
|
| 206 |
+
|
| 207 |
+
Next, we use the displacements between the groundtruth keypoints of the test sequences to construct targeted user controls for each individual part of the human body. By using these manipulations as test-time inputs and estimating keypoints [66] for the resulting generated videos, we assess the targeted control accuracy by measuring the Mean Squared Error (MSE) only between those estimated and groundtruth keypoints which correspond to the poked body parts. Fig. 8 shows the resulting error distributions and means (black dots) showing that we significantly outperform Hao et al. [27] and achieve similar performance to their keypoint-based version. Thus our model allows for accurate control of body parts which are correctly moved to the intended target locations.
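A sketch of this targeted accuracy measure, assuming keypoints have already been estimated for the synthesized video; only the joints whose displacement defined the poke enter the error:

```python
import torch

def control_accuracy(kps_pred, kps_gt, poked_idx):
    """MSE between predicted and ground-truth keypoints, restricted to poked joints.

    kps_pred, kps_gt: (T, K, 2) keypoint trajectories for one synthesized and one
    ground-truth video; poked_idx lists the keypoints used to construct the control c.
    """
    poked_idx = torch.as_tensor(poked_idx)
    diff = kps_pred[:, poked_idx] - kps_gt[:, poked_idx]
    return diff.pow(2).mean()
```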
|
| 208 |
+
|
| 209 |
+
Stochastic Video Synthesis. To evaluate the visual quality and the diversity of generated videos we compare against recent state of the art methods for stochastic video synthesis (SVS) [40, 53, 24], each of them based on variational autoencoder (VAE). We adopt the SVS evaluation protocol and generate videos of spatial size $64 \times 64$ . Tab. 2
|
| 210 |
+
|
| 211 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">PP</td><td colspan="3">iPER [42]</td><td colspan="3">Tai-Chi [64]</td><td colspan="3">Human3.6m [30]</td></tr><tr><td>FVD ↓</td><td>DIV MSE‡ ↑</td><td>DIV LPIPS‡ ↑</td><td>FVD ↓</td><td>DIV MSE‡ ↑</td><td>DIV LPIPS‡ ↑</td><td>FVD ↓</td><td>DIV MSE‡ ↑</td><td>DIV LPIPS‡ ↑</td><td>FVD ↓</td><td>DIV MSE‡ ↑</td><td>DIV LPIPS‡ ↑</td></tr><tr><td>SAVP [40]†</td><td>92.2</td><td>-</td><td>-</td><td>92.8</td><td>-</td><td>-</td><td>236.8</td><td>-</td><td>-</td><td>131.7</td><td>-</td><td>-</td></tr><tr><td>IVRNN [10]</td><td>128.3</td><td>2.52</td><td>8.23</td><td>126.0</td><td>37.66</td><td>93.26</td><td>150.2</td><td>0.34</td><td>1.65</td><td>238.6</td><td>46.45</td><td>106.71</td></tr><tr><td>SRVP [24]</td><td>171.9</td><td>110.37</td><td>225.77</td><td>274.2</td><td>53.94</td><td>164.46</td><td>268.9</td><td>30.2</td><td>16.00</td><td>140.1</td><td>93.61</td><td>224.07</td></tr><tr><td>iPOKE (Ours)</td><td>56.59</td><td>133.37</td><td>275.04</td><td>81.49</td><td>98.95</td><td>201.58</td><td>96.09</td><td>69.96</td><td>126.76</td><td>111.55</td><td>124.25</td><td>309.06</td></tr></table>
|
| 212 |
+
|
| 213 |
+
Table 2. Comparison with recent state-of-the-art in stochastic video prediction. As our model does not face a trade-off between variability and synthesis quality, we obtain significantly better video quality and motion diversity scores for all considered datasets. $\dagger$ : SAVP faced mode collapse due to training instabilities caused by the two involved discriminator networks. As a consequence, their model generates identical outputs when sampling. Therefore, we are unable to report diversity scores for this baseline. $\ddagger$ : Reported numbers are multiplied by $10^4$.
|
| 214 |
+
|
| 215 |
+
summarizes the comparison in video quality (measured in FVD score) and sample diversity (measured using LPIPS and pixel-space MSE). All SVS methods are conditioned on two image frames directly preceding the predicted sequence if not stated otherwise. Details for training and evaluation protocols can be found in the Appendices F and G. Our method outperforms all competing approaches by large margins in both video quality and diversity. Moreover, Tab. 2 reveals that competing methods which achieve comparable FVD scores to ours, i.e. video synthesis of similar visual quality, fail in generating diverse samples and vice versa. We attribute the limited performance of these models to the discussed trade-off in synthesis quality and capturing data variations of VAE-based approaches (cf. Sec. 3.1).
|
| 216 |
+
|
| 217 |
+
Controlling Future Ambiguity. We now assess the ability of our model to control the degree of freedom in stochastic object articulation by varying the number of local pokes. Intuitively, an increasing number of user controls should result in more accurate predictions and lowered between-sample-variance due to the reduction in future ambiguity. We evaluate the amount of uncertainty in predictions by comparing average reconstruction scores of a fixed number of samples from $q(r)$ for increasing numbers of user controls. More specifically, we report the mean prediction error and standard deviation of 50 samples (Std-50s) for each of 1000 input images and pokes. On the iPER dataset this is done by measuring MSE between estimated [66] and groundtruth keypoints. For PP dataset we resort to the LPIPS metrics as keypoints are not available. The resulting curves are depicted in Fig. 9. As expected, the decreasing prediction errors and between-sample-variances indicate that our model leverages the additional future information provided by an increased number of pokes. Thus, our model not only generates diverse predictions but also provides means to control their uncertainty by choosing an appropriate number of input pokes.
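A sketch of the Std-50s statistic, assuming a hypothetical callable that draws one residual sample, synthesizes the corresponding video for the fixed inputs and returns its error:

```python
import torch

def error_and_spread(sample_and_score, n_samples=50):
    """Mean prediction error and between-sample standard deviation (Std-50s).

    `sample_and_score` is an illustrative callable that draws r ~ q(r), synthesizes
    a video for the fixed (x0, pokes) and returns its error (e.g. keypoint MSE or LPIPS).
    """
    scores = torch.tensor([float(sample_and_score()) for _ in range(n_samples)])
    return scores.mean().item(), scores.std().item()

# Repeating this for 1 to 5 pokes per image traces curves like those in Fig. 9.
```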
|
| 218 |
+
|
| 219 |
+
Ablation Study. As the competing VAE-grounded baselines for SVS are all conditioned on observed motion in the form of past frames rather than dedicated, local user control, we further compare our method to a cVAE baseline (Ours cVAE) for locally controlled video synthesis. Thus, we use the exact architecture of our video-autoencoding framework (cf. Sec. 3.2) except for our latent cINN model. To enable sampling, we realize the latent video representation $z$ as a Gaussian distribution and regularize it towards a standard normal prior. The encodings obtained from the control $c$ and source image $x_0$ are concatenated with $z$ and
|
| 220 |
+
|
| 221 |
+

|
| 222 |
+
Figure 9. Controlling Future Ambiguity: On the PP (left) and iPER (right) datasets our model reduces mean prediction errors (blue) and standard deviations of a sample of 50 residual samples given the same $(x_0, c)$ for an increased number of control inputs. Thus, our approach enables users to control future ambiguity by selecting the number of control inputs.
|
| 223 |
+
|
| 224 |
+

|
| 225 |
+
|
| 226 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">PP</td><td colspan="3">Human3.6m [30]</td></tr><tr><td>FVD ↓</td><td>DIV MSE ↑</td><td>DIV LPIPS ↑</td><td>FVD ↓</td><td>DIV MSE ↑</td><td>DIV LPIPS ↑</td></tr><tr><td>Ours cVAE</td><td>70.9</td><td>3.37</td><td>7.59</td><td>269.6</td><td>83.17</td><td>210.39</td></tr><tr><td>iPOKE (Ours)</td><td>56.59</td><td>133.37</td><td>275.04</td><td>111.55</td><td>124.25</td><td>309.06</td></tr></table>
|
| 227 |
+
|
| 228 |
+
Table 3. Ablation. Comparison with a cVAE-counterpart to our cINN-based model for controlled video synthesis, indicating its superior performance due to variability vs. quality trade-off in cVAE.
|
| 229 |
+
|
| 230 |
+
constitute the hidden state for the latent GRU. A detailed architecture and training description of the baseline is contained in the Appendix F. Thus, this baseline is the exact variational counterpart of our model. We conduct ablation experiments on all the considered object categories, using the PP and Human3.6m datasets. Tab. 3, which summarizes the results, again indicates the improved performance of our invertible model compared to variational approaches.
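For completeness, a sketch of the reparameterized latent and the KL regularizer towards the standard normal prior that such a cVAE baseline relies on (names are illustrative):

```python
import torch

def reparameterized_latent(mu, logvar):
    """Sample z ~ N(mu, diag(exp(logvar))) via the reparameterization trick and
    return the KL divergence to the standard normal prior used for regularization."""
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1).mean()
    return z, kl
```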
|
| 231 |
+
|
| 232 |
+
# 5. Conclusion
|
| 233 |
+
|
| 234 |
+
We presented a novel model for controlling and synthesizing object kinematics of arbitrary object categories by locally manipulating object articulation using simple mouse drags. Our model is based on an invertible mapping between the generated video sequences and a dedicated kinematics representation learned from training videos only. To account for the ambiguity in the global object articulation given a local shift determining the motion of only an object part, learning is based on a probabilistic formulation, thus allowing us to sample and synthesize diverse kinematic realizations.
|
| 235 |
+
|
| 236 |
+
# Acknowledgements
|
| 237 |
+
|
| 238 |
+
This research is funded in part by the German Federal Ministry for Economic Affairs and Energy within the project KI-Absicherung Safe AI for automated driving and by the German Research Foundation (DFG) within project 421703927.
|
| 239 |
+
|
| 240 |
+
# References
|
| 241 |
+
|
| 242 |
+
[1] Kfir Aberman, Yijia Weng, Dani Lischinski, Daniel Cohen-Or, and Baoquan Chen. Unpaired motion style transfer from video to animation. ACM Transactions on Graphics (TOG), 39(4):64, 2020. 1, 2
|
| 243 |
+
[2] Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information bottleneck. CoRR, 2016. 4, 16, 17
|
| 244 |
+
[3] Lynton Ardizzone, Jakob Kruse, Carsten Rother, and Ullrich Köthe. Analyzing inverse problems with invertible neural networks. In Int. Conf. Learn. Represent., 2019. 2
|
| 245 |
+
[4] Wenbo Bao, Wei-Sheng Lai, Chao Ma, Xiaoyun Zhang, Zhiyong Gao, and Ming-Hsuan Yang. Depth-aware video frame interpolation. In IEEE Conf. Comput. Vis. Pattern Recog., pages 3703-3712, 2019. 2
|
| 246 |
+
[5] Andreas Blattmann, Timo Milbich, Michael Dorkenwald, and Bjorn Ommer. Behavior-driven synthesis of human dynamics. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12236-12246, June 2021. 2
|
| 247 |
+
[6] Andreas Blattmann, Timo Milbich, Michael Dorkenwald, and Bjorn Ommer. Understanding object dynamics for interactive image-to-video synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5171-5181, June 2021. 2, 5, 7, 19
|
| 248 |
+
[7] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In Int. Conf. Learn. Represent., 2019. 17
|
| 249 |
+
[8] J. Carreira and A. Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 20
|
| 250 |
+
[9] João Carreira and Andrew Zisserman. Quo vadis, action recognition? A new model and the kinetics dataset. In IEEE Conf. Comput. Vis. Pattern Recog., pages 4724-4733, 2017. 20
|
| 251 |
+
[10] L. Castrejon, N. Ballas, and A. Courville. Improved conditional vrnns for video prediction. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019. 2, 3, 7, 8, 15, 19
|
| 252 |
+
[11] Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. In Int. Conf. Learn. Represent., 2017. 3, 20
|
| 253 |
+
[12] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling, 2014. cite arxiv:1412.3555Comment: Presented in NIPS 2014 Deep Learning and Representation Learning Workshop. 4, 17
|
| 254 |
+
[13] Aidan Clark, Jeff Donahue, and Karen Simonyan. Adversarial video generation on complex datasets, 2019. 4, 17
|
| 255 |
+
[14] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. 18
|
| 256 |
+
|
| 257 |
+
[15] Abe Davis, Justin G. Chen, and Frédo Durand. Image-space modal bases for plausible manipulation of objects in video. ACM Trans. Graph., 2015. 1, 2
|
| 258 |
+
[16] Emily Denton and Rob Fergus. Stochastic video generation with a learned prior. In Jennifer G. Dy and Andreas Krause, editors, International Conference on Machine Learning, pages 1182-1191, 2018. 2, 3
|
| 259 |
+
[17] Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: non-linear independent components estimation. In Yoshua Bengio and Yann LeCun, editors, Int. Conf. Learn. Represent., 2015. 4, 5, 18
|
| 260 |
+
[18] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. In Int. Conf. Learn. Represent., 2017. 4, 5, 18
|
| 261 |
+
[19] Michael Dorkenwald, Timo Milbich, Andreas Blattmann, Robin Rombach, Konstantinos G. Derpanis, and Bjorn Ommer. Stochastic image-to-video synthesis using cinns. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3742-3753, June 2021. 2, 4
|
| 262 |
+
[20] Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, editors, Adv. Neural Inform. Process. Syst., pages 658-666, 2016. 17
|
| 263 |
+
[21] Patrick Esser, Johannes Haux, Timo Milbich, and Björn Ommer. Towards learning a realistic rendering of human behavior. In ECCV Workshops, pages 409–425, 2018. 1
|
| 264 |
+
[22] Patrick Esser, Robin Rombach, and Björn Ommer. A disentangling invertible interpretation network for explaining latent representations. In IEEE Conf. Comput. Vis. Pattern Recog., pages 9220-9229, 2020. 2
|
| 265 |
+
[23] Chelsea Finn, Ian J. Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, editors, Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 64-72, 2016. 2
|
| 266 |
+
[24] Jean-Yves Franceschi, Edouard Delasalles, Mickael Chen, Sylvain Lamprier, and P. Gallinari. Stochastic latent residual video prediction. 2020. 2, 6, 7, 8, 13, 19
|
| 267 |
+
[25] Hang Gao, Huazhe Xu, Qi-Zhi Cai, Ruth Wang, Fisher Yu, and Trevor Darrell. Disentangling propagation and generation for video prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019. 2
|
| 268 |
+
[26] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of wasserstein gans. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Adv. Neural Inform. Process. Syst., pages 5767-5777, 2017. 17
|
| 269 |
+
[27] Zekun Hao, Xun Huang, and Serge Belongie. Controllable video generation with sparse trajectories. In IEEE Conf. Comput. Vis. Pattern Recog., 2018. 2, 7, 15, 19
|
| 270 |
+
|
| 271 |
+
[28] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conf. Comput. Vis. Pattern Recog., pages 770-778, 2016. 4, 17, 18
|
| 272 |
+
[29] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017. 5
|
| 273 |
+
[30] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(7):1325-1339, jul 2014. 6, 7, 8, 13, 18, 19
|
| 274 |
+
[31] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5967-5976, 2017. 18
|
| 275 |
+
[32] Jorn-Henrik Jacobsen, Arnold W. M. Smeulders, and Edouard Oyallon. i-revnet: Deep invertible networks. In Int. Conf. Learn. Represent., 2018. 2, 5
|
| 276 |
+
[33] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, editors, Eur. Conf. Comput. Vis., pages 694–711, 2016. 17
|
| 277 |
+
[34] Will Kay, João Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman. The Kinetics human action video dataset. CoRR, 2017. 7
|
| 278 |
+
[35] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, Int. Conf. Learn. Represent., 2015. 18
|
| 279 |
+
[36] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. 18, 20
|
| 280 |
+
[37] Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, 2018. 2, 5, 18
|
| 281 |
+
[38] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In Yoshua Bengio and Yann LeCun, editors, Int. Conf. Learn. Represent., 2014. 3, 4, 17, 19
|
| 282 |
+
[39] Manoj Kumar, Mohammad Babaeizadeh, Dumitru Erhan, Chelsea Finn, Sergey Levine, Laurent Dinh, and Durk Kingma. Videoflow: A conditional flow-based model for stochastic video generation. In Int. Conf. Learn. Represent., 2020. 2
|
| 283 |
+
[40] Alex X. Lee, Richard Zhang, Frederik Ebert, Pieter Abbeel, Chelsea Finn, and Sergey Levine. Stochastic adversarial video prediction. Eur. Conf. Comput. Vis., 2018. 2, 3, 7, 8, 19
|
| 284 |
+
[41] Jae Hyun Lim and Jong Chul Ye. Geometric gan, 2017. 17
|
| 285 |
+
[42] Wen Liu, Zhixin Piao, Jie Min, Wenhan Luo, Lin Ma, and Shenghua Gao. Liquid warping gan: A unified framework for human motion imitation, appearance transfer and novel view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019. 6, 7, 8, 13, 14, 18
|
| 286 |
+
|
| 287 |
+
[43] Yu-Lun Liu, Yi-Tung Liao, Yen-Yu Lin, and Yung-Yu Chuang. Deep video frame interpolation using cyclic frame generation. In Proceedings of the 33rd Conference on Artificial Intelligence (AAAI), 2019. 2
|
| 288 |
+
[44] Z. Liu, R. A. Yeh, X. Tang, Y. Liu, and A. Agarwala. Video frame synthesis using deep voxel flow. In 2017 IEEE International Conference on Computer Vision (ICCV), 2017. 2
|
| 289 |
+
[45] C. Lu, M. Hirsch, and B. Scholkopf. Flexible spatio-temporal networks for video prediction. In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, pages 2137-2145. IEEE, 2017. 2, 3
|
| 290 |
+
[46] James Lucas, G. Tucker, R. Grosse, and Mohammad Norouzi. Understanding posterior collapse in generative latent variable models. In DGS@ICLR, 2019. 20
|
| 291 |
+
[47] Andreas Lugmayr, Martin Danelljan, Luc Van Gool, and Radu Timofte. SRFlow: Learning the super-resolution space with normalizing flow. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Eur. Conf. Comput. Vis., pages 715-732, 2020. 2, 18
|
| 292 |
+
[48] Xuezhe Ma, Xiang Kong, Shanghang Zhang, and Eduard Hovy. Macow: Masked convolutional generative flow. 2019. 5, 18
|
| 293 |
+
[49] J.G. MacGregor. An Elementary Treatise on Kinematics and Dynamics. Macmillan, 1902. 1
|
| 294 |
+
[50] Arun Mallya, Ting-Chun Wang, Karan Sapra, and Ming-Yu Liu. World-consistent video-to-video synthesis. In CVPR, 2020. 2
|
| 295 |
+
[51] Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. In 4th International Conference on Learning Representations, ICLR 2016, 2016. 2
|
| 296 |
+
[52] Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for gans do actually converge? In International Conference on Machine learning (ICML), 2018. 17
|
| 297 |
+
[53] Matthias Minderer, Chen Sun, Ruben Villegas, Forrester Cole, Kevin P. Murphy, and Honglak Lee. Unsupervised learning of object structure and dynamics from videos. In Adv. Neural Inform. Process. Syst., pages 92–102, 2019. 2, 6, 7, 13
|
| 298 |
+
[54] Simon Niklaus and Feng Liu. Context-aware synthesis for video frame interpolation. In IEEE Conf. Comput. Vis. Pattern Recog., pages 1701-1710, 2018. 2
|
| 299 |
+
[55] Simon Niklaus and Feng Liu. Softmax splatting for video frame interpolation. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5436-5445, 2020. 2
|
| 300 |
+
[56] George Papamakarios, Eric T. Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. CoRR, 2019. 4, 18
|
| 301 |
+
[57] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Adv. Neural Inform. Process. Syst., pages 8024-8035. 2019. 17
|
| 302 |
+
|
| 303 |
+
[58] Albert Pumarola, Stefan Popov, Francesc Moreno-Noguer, and Vittorio Ferrari. C-flow: Conditional generative flow models for images and 3d point clouds. In IEEE Conf. Comput. Vis. Pattern Recog., pages 7946-7955, 2020. 2
|
| 304 |
+
[59] Fitsum Reda, Guilin Liu, Kevin Shih, Robert Kirby, Jon Barker, David Tarjan, Andrew Tao, and Bryan Catanzaro. SDC-Net: Video Prediction Using Spatially-Displaced Convolution. 2018. 2
|
| 305 |
+
[60] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Francis R. Bach and David M. Blei, editors, International Conference on Machine Learning, pages 1530-1538, 2015. 2, 18
|
| 306 |
+
[61] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pages 1278-1286, 2014. 3, 17, 19
|
| 307 |
+
[62] Robin Rombach, Patrick Esser, and Björn Ommer. Network fusion for content creation with conditional inns. CoRR, 2020. 2, 4, 5
|
| 308 |
+
[63] Robin Rombach, Patrick Esser, and Björn Ommer. Network-to-network translation with conditional invertible neural networks. In Adv. Neural Inform. Process. Syst., 2020. 2, 5, 16, 17
|
| 309 |
+
[64] Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, and Nicu Sebe. First order motion model for image animation. In Adv. Neural Inform. Process. Syst., pages 7135-7145, 2019. 6, 7, 8, 13, 18
|
| 310 |
+
[65] Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2015. 3, 17
|
| 311 |
+
[66] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5693-5703. Computer Vision Foundation / IEEE, 2019. 7, 8
|
| 312 |
+
[67] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In IEEE Conf. Comput. Vis. Pattern Recog., pages 2818-2826, 2016. 7
|
| 313 |
+
[68] Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphaël Marinier, Marcin Michalski, and Sylvain Gelly. Towards accurate generative models of video: A new metric & challenges. CoRR, 2018. 7, 20
|
| 314 |
+
[69] Sjoerd van Steenkiste, Michael Chang, Klaus Greff, and Jürgen Schmidhuber. Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. In International Conference on Learning Representations, 2018.
|
| 315 |
+
[70] Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, and Honglak Lee. Decomposing motion and content for natural video sequence prediction. In Int. Conf. Learn. Represent., 2017. 2, 13
|
| 316 |
+
[71] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 613-621, 2016. 2
|
| 317 |
+
[72] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In IEEE Conf. Comput. Vis. Pattern Recog., pages 8798-8807, 2018. 17
|
| 320 |
+
[73] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Nikolai Yakovenko, Andrew Tao, Jan Kautz, and Bryan Catanzaro. Video-to-video synthesis. In Adv. Neural Inform. Process. Syst., pages 1152-1164, 2018. 2, 4, 17
|
| 321 |
+
[74] Tsun-Hsuan Wang, Yen-Chi Cheng, Chieh Hubert Lin, Hwann-Tzong Chen, and Min Sun. Point-to-point video generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019. 2
|
| 322 |
+
[75] Dirk Weissenborn, Oscar Täckström, and Jakob Uszkoreit. Scaling autoregressive video models. In Int. Conf. Learn. Represent., 2020. 2
|
| 323 |
+
[76] Nevan Wichers, Ruben Villegas, Dumitru Erhan, and Honglak Lee. Hierarchical long-term video prediction without supervision. In Jennifer G. Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, ICML, 2018. 2, 6, 13
|
| 324 |
+
[77] Yue Wu, Rongrong Gao, Jaesik Park, and Qifeng Chen. Future video synthesis with object motion prediction. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5538-5547, 2020. 2, 3
|
| 325 |
+
[78] Yuxin Wu and Kaiming He. Group normalization. In Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss, editors, Eur. Conf. Comput. Vis., pages 3-19, 2018. 17, 18
|
| 326 |
+
[79] Tianfan Xue, Baian Chen, Jiajun Wu, Donglai Wei, and William T Freeman. Video enhancement with task-oriented flow. International Journal of Computer Vision (IJCV), 127(8):1106-1125, 2019. 2
|
| 327 |
+
[80] Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional image generation from visual attributes. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, editors, Eur. Conf. Comput. Vis., pages 776-791, 2016. 4
|
| 328 |
+
[81] Ceyuan Yang, Zhe Wang, Xinge Zhu, Chen Huang, Jianping Shi, and Dahua Lin. Pose guided human video generation. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018. 2
|
| 329 |
+
[82] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 7, 20
|
| 330 |
+
[83] Shengjia Zhao, Jiaming Song, and Stefano Ermon. Infovae: Information maximizing variational autoencoders. CoRR, 2017. 3, 20
|
| 331 |
+
[84] Zhou Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. TIP, 2004. 7
|
| 332 |
+
[85] Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A. Efros, Oliver Wang, and Eli Shechtman. Toward multimodal image-to-image translation. In Adv. Neural Inform. Process. Syst., pages 465-476, 2017. 7
|
ipokepokingastillimageforcontrolledstochasticvideosynthesis/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2cec8b5a648b237da4fb786da3bc57a6828d38095bdadd14b75e12cfbfbada7b
|
| 3 |
+
size 588761
|
ipokepokingastillimageforcontrolledstochasticvideosynthesis/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ba6f28464bad431a0b09a1c4c1cd95f5fb9c135a3efe5f9e4ec40f2b50832042
|
| 3 |
+
size 500238
|
mdalumultisourcedomainadaptationandlabelunificationwithpartialdatasets/ec3dc6da-bcdf-4075-945a-6df245e914e8_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:31b4412c022f1477e7befcdf560c484a7c25eebccd00492571798c6514c79fe6
|
| 3 |
+
size 87706
|
mdalumultisourcedomainadaptationandlabelunificationwithpartialdatasets/ec3dc6da-bcdf-4075-945a-6df245e914e8_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:bb032b4bcdb04a307696633aefcc2217981cde5251055254a5b85ed96a4953d1
|
| 3 |
+
size 105534
|
mdalumultisourcedomainadaptationandlabelunificationwithpartialdatasets/ec3dc6da-bcdf-4075-945a-6df245e914e8_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f44faf30331937c2aa352f0ed3195f81d7487f7560d72609b776b7baaac8d142
|
| 3 |
+
size 8026827
|
mdalumultisourcedomainadaptationandlabelunificationwithpartialdatasets/full.md
ADDED
|
@@ -0,0 +1,353 @@
|
| 1 |
+
# mDALU: Multi-Source Domain Adaptation and Label Unification with Partial Datasets
|
| 2 |
+
|
| 3 |
+
Rui Gong $^{1}$ , Dengxin Dai $^{1,4}$ , Yuhua Chen $^{1}$ , Wen Li $^{3}$ , Luc Van Gool $^{1,2}$
$^{1}$ Computer Vision Lab, ETH Zurich, $^{2}$ VISICS, KU Leuven, $^{3}$ UESTC, $^{4}$ MPI for Informatics
|
| 4 |
+
{gongr, dai, yuhua.chen, vangool}@vision.ee.ethz.ch, liwenbnu@gmail.com
|
| 5 |
+
|
| 6 |
+
# Abstract
|
| 7 |
+
|
| 8 |
+
One challenge of object recognition is to generalize to new domains, to more classes and/or to new modalities. This necessitates methods to combine and reuse existing datasets that may belong to different domains, have partial annotations, and/or have different data modalities. This paper formulates this as a multi-source domain adaptation and label unification problem, and proposes a novel method for it. Our method consists of a partially-supervised adaptation stage and a fully-supervised adaptation stage. In the former, partial knowledge is transferred from multiple source domains to the target domain and fused therein. Negative transfer between unmatched label spaces is mitigated via three new modules: domain attention, uncertainty maximization and attention-guided adversarial alignment. In the latter, knowledge is transferred in the unified label space after a label completion process with pseudo-labels. Extensive experiments on three different tasks - image classification, 2D semantic image segmentation, and joint 2D-3D semantic segmentation - show that our method outperforms all competing methods significantly.
|
| 9 |
+
|
| 10 |
+
# 1. Introduction
|
| 11 |
+
|
| 12 |
+
The development of object recognition is carried by two pillars: large-scale data annotation and deep neural networks. With new applications coming out every day, researchers need to constantly develop new methods and create new datasets. While we are able to develop novel neural networks for new tasks, the creation of new datasets can hardly keep up due to its huge cost. In the literature, a diverse set of learning paradigms, such as self-learning [13], semi-supervised learning [17] and transfer learning [6], have been developed to come to the rescue. We enrich this repository by developing a method to combine multiple existing datasets that have been annotated in different domains, for smaller-scale tasks (fewer classes), and/or with fewer data modalities. The importance of the method
|
| 13 |
+
|
| 14 |
+

|
| 15 |
+
Figure 1: mDALU learns a complete-class and complete-modality object recognition model for a new, unlabeled target domain, by using multiple datasets with partial-class annotation and partial data modality as source domains.
|
| 16 |
+
|
| 17 |
+
can be justified by the fact that as time goes, research goals will become more and more ambitious, so object recognition models for more classes, new domains, and/or more data modalities are necessary.
|
| 18 |
+
|
| 19 |
+
To address this, we propose a multi-source domain adaptation and label unification (mDALU) problem. In this setting, there are multiple source domains and an unlabeled target domain. In each source domain, only samples (images, pixels, or LiDAR points) belonging to a subset of classes are labeled; the rest are unlabeled. The subsets of classes having labels can be different over different source domains, and can have inconsistent taxonomies, e.g., truck is labeled as "truck" in one source domain but labeled as "vehicle" together with other types of vehicles in another. Further, the data modalities in different source domains can also be different, e.g., one contains images and the other contains LiDAR point clouds. The goal is to obtain an object recognition model for all classes in the target domain. Fig. 1 shows an exemplar setting of mDALU. A comparison to other domain adaptation settings, in Table 1, shows that mDALU is very flexible.
|
| 20 |
+
|
| 21 |
+
This goal is challenging. Firstly, there is the notorious issue of negative transfer. While negative transfer is
|
| 22 |
+
|
| 23 |
+
<table><tr><td>Domain Adaptation Setting</td><td>Can Handle Multiple Source Domains?</td><td>Can Handle Multiple Data Modalities?</td><td>Can Handle Different Label Spaces of Source Domains?</td><td>Change of Label Space Size from Source to Target Domain</td><td>Can Handle Partial Annotations?</td><td>Can Handle Inconsistent Taxonomy?</td></tr><tr><td>Unsupervised Domain Adaptation [10]</td><td>No</td><td>No</td><td>-</td><td>Same Size</td><td>No</td><td>-</td></tr><tr><td>Partial Domain Adaptation [3]</td><td>No</td><td>No</td><td>-</td><td>Reduced</td><td>No</td><td>-</td></tr><tr><td>Multi-Source Domain Adaptation [26, 43]</td><td>Yes</td><td>No</td><td>No</td><td>Same Size</td><td>No</td><td>No</td></tr><tr><td>Category-Shift Multi-Source Domain Adaptation [39]</td><td>Yes</td><td>No</td><td>Yes</td><td>Increased</td><td>No</td><td>No</td></tr><tr><td>Multi-Modal Domain Adaptation [18]</td><td>Yes</td><td>Yes</td><td>No</td><td>Same Size</td><td>No</td><td>No</td></tr><tr><td>Multi-Source Open-Set Domain Adaptation [27, 25]</td><td>Yes</td><td>No</td><td>No</td><td>Same Size + 1*</td><td>Yes</td><td>No</td></tr><tr><td>Multi-Source Domain Adaptation and Label Unification (mDALU)</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Increased</td><td>Yes</td><td>Yes</td></tr></table>
|
| 24 |
+
|
| 25 |
+
Table 1: Comparison between our mDALU and other domain adaptation settings (see Sec. 2 for details). It is clear that mDALU offers a very flexible and general setting. * “1” means an additional “unknown” class in the target domain.
|
| 26 |
+
|
| 27 |
+
an issue also for standard transfer and multi-task learning, it is especially severe in our mDALU task due to the influence of unlabeled classes. To address this, we propose three novel modules, termed domain attention, uncertainty maximization and attention-guided adversarial alignment, to avoid making confident predictions for unlabeled samples in the source domains, and to enable robust distribution alignment between the source domains and the target domain. The method with the aforementioned modules and attention-guided prediction fusion is able to generate good results in the unified label space and on the target domain. In order to further improve the results, we need to fuse the supervision of all partial datasets to transfer the supervision in the unified label space. To this aim, we propose a pseudo-label based supervision fusion module. In particular, we generate pseudo-labels for the unlabeled samples in the source domains and all samples in the target domain. Standard supervised learning is then performed in the unified label space for the final model.
|
| 28 |
+
|
| 29 |
+
To showcase the effectiveness of our method, we evaluate it on three different tasks: image classification, 2D semantic image segmentation, and joint 2D-3D semantic segmentation. Synthetic and real data, and images and LiDAR point clouds are involved. Also, non-overlapping, partially-overlapping and fully-overlapping label spaces, and consistent and inconsistent taxonomies across source domains are covered. Experiments show that our method outperforms all competing methods significantly.
|
| 30 |
+
|
| 31 |
+
# 2. Related Work
|
| 32 |
+
|
| 33 |
+
Multi-Source Domain Adaptation. Transfer learning and domain adaptation have been extensively studied in the past years. Several effective strategies have been developed such as minimizing maximum mean discrepancy [36, 23], moment matching [40], adversarial domain confusion [10, 35], entropy regularization [37], and curriculum domain adaptation [9]. While great progress has been achieved, most algorithms focus on the single-source adaptation setting. This limits the methods from being used when data is collected from multiple source domains. That is why multi-source domain adaptation methods are proposed [8, 42, 26, 15, 43]. Yet, these methods all assume the same label space for all domains. Xu et al. [39] explores the problem of the category shift among different source domains, and adopts the
|
| 34 |
+
|
| 35 |
+
k-way domain discriminator to reduce the effect of category shift. But the method is mainly proposed for the image classification task, and cannot deal with the problem of partial annotation, inconsistent taxonomies and modal differences among different source domains.
|
| 36 |
+
|
| 37 |
+
Open-Set/Partial Domain Adaptation. Recent research explores the category "openness" between the source domain and the target domain, which is divided into open-set domain adaptation and partial domain adaptation. Open-set domain adaptation [25, 33, 27] assumes that the target domain includes new classes that are unseen in the source domain, and aims to classify the unseen class samples as "unknown" class in the target domain. Partial domain adaptation [2, 41, 3, 19] aims to transfer knowledge from existing large-scale domains (e.g. 1K classes) to unknown small-scale domains (e.g. 20 classes) for customized applications. Different than both open-set and partial domain adaptation, our label space of the target domain is the union of label spaces of all source domains.
|
| 38 |
+
|
| 39 |
+
Learning from multiple datasets. Several successful methods [28, 29, 38, 20] have been proposed to learn a single universal network, that can represent different domains with a minimum of domain-specific parameters. But those methods do not consider domain adaptation and label space unification. Recently, Lambert et al. [21] presented a composite dataset that unifies different semantic segmentation datasets by reconciling the taxonomies, merging and splitting classes manually. But they do not address the problem of domain adaptation, partial annotation and cross-modal data, and they rely on the manual re-annotation for unification. The object detection method by Zhao et al. [44] performs label space unification from multiple datasets with partial annotations, but it does not consider other problems that are considered by our method such as domain discrepancies, inconsistent taxonomies and mismatched data modalities across the datasets.
|
| 40 |
+
|
| 41 |
+
# 3. Approach
|
| 42 |
+
|
| 43 |
+
# 3.1. Problem Statement
|
| 44 |
+
|
| 45 |
+
For the problem of mDALU, we are given $K$ source domains $\mathcal{S}_1,\mathcal{S}_2,\dots,\mathcal{S}_K$ . The $K$ source domains contain the samples from $K$ different distributions $P_{S_1},P_{S_2},\ldots ,P_{S_K}$ which are labeled with $C_1,C_2,\ldots ,C_K$ classes, resp. All
|
| 46 |
+
|
| 47 |
+
the source domains can contain both partially labeled and unlabeled samples. The unlabeled samples can belong to the labeled classes of other domains. The label spaces $\mathcal{C}_1, \mathcal{C}_2, \dots, \mathcal{C}_K$ can be non-, partially-, or fully-overlapping with each other. Moreover, both consistent and inconsistent taxonomies among $\mathcal{C}_1, \mathcal{C}_2, \dots, \mathcal{C}_K$ are allowed. Then the union of the label spaces $\mathcal{C}_i, i = 1, \dots, K$ forms the unified and complete label space $\mathcal{C}_{\cup} = \mathcal{C}_1 \cup \mathcal{C}_2 \cdots \mathcal{C}_K$ , including $C_{\cup}$ classes. Besides, the unlabeled target domain $\mathcal{T}$ is given, containing samples from the distribution $P_T$ . Denoting the source samples $\mathbf{x}^{s_i} \in S_i, i = 1, \dots, K$ and the target samples $\mathbf{x}^t \in \mathcal{T}$ , we have $\mathbf{x}^{s_i} \sim P_{S_i}, \mathbf{x}^t \sim P_T$ , $P_{S_1} \neq P_{S_2} \neq \dots \neq P_{S_K} \neq P_T$ . The mDALU problem aims at training the model on the $K$ source domains $S_i, i = 1, \dots, K$ , labeled with $C_i$ classes in each, and the unlabeled target domain $\mathcal{T}$ , to improve the performance of the model on the target domain $\mathcal{T}$ in the unified label space $\mathcal{C}_{\cup}$ . We use $\mathbf{y}^{s_i}$ to indicate the ground-truth label map of $\mathbf{x}^{s_i}$ . Note that we present most of our approach with the notation of 2D semantic image segmentation. The translation to image classification and 3D point cloud segmentation is straightforward - by replacing pixels with images and by replacing pixels with 3D LiDAR points.
|
| 48 |
+
|
| 49 |
+
# 3.2. Our Approach to mDALU problem
|
| 50 |
+
|
| 51 |
+
As shown in Fig. 2, there are two stages in our approach, the partially-supervised adaptation stage and the fully supervised adaptation stage. In the partially-supervised adaptation stage, the partial supervision is transferred to the target domain from different source domains, respectively. Then in the fully-supervised adaptation stage, the supervision, in complete label space, is fused and self-completed on the unlabeled samples, and jointly transferred in the source domains and target domain. In order to realize adaptation under partial supervision, we propose three modules: DAT, UM and $\mathrm{A}^3$ for the first stage. Then in the second stage, we use PSF and further learning. Below we provide details of all these components. From Sec. 3.2.1 to Sec. 3.2.5, we first introduce our method for mDALU under consistent taxonomies. In this part, we first describe a basic version of our method composed of DAT and inference via attention-guided fusion, which will be followed by UM and $\mathrm{A}^3$ to enhance the adaptation ability. Finally, we present PSF. Then in Sec. 3.2.6, we extend our proposed method towards mDALU under inconsistent taxonomies.
|
| 52 |
+
|
| 53 |
+
# 3.2.1 Partially-Supervised Learning
|
| 54 |
+
|
| 55 |
+
Different segmentation networks $G_{i}, i = 1, \dots, K$ are adopted for different source domains $S_{i}$ . While their annotations cover partial label spaces $\mathcal{C}_{i}$ , we train each network $G_{i}$ in the unified label space $\mathcal{C}_{\cup}$ - some classes have no training data - with a standard cross-entropy loss $\mathcal{L}_{\mathrm{psu}}$ . The network $G_{i}$ is composed of a feature extractor $E_{i}$ and a label
|
| 56 |
+
|
| 57 |
+
predictor $B_{i}$ , i.e., $G_{i} = \{E_{i}, B_{i}\}$ . While we can average the results of these models directly in the target domain for predictions in the unified label space, coined multi-branch (MBR) fusion, this generates poor results. This is because the predictions of each model $G_{i}$ for its unlabeled classes in $\mathcal{C}_{\cup} \setminus \mathcal{C}_{i}$ can be arbitrary numbers that dominate the averages. We thus propose the domain attention (DAT) module, which learns the attention map for $G_{i}$ to signal on which area its prediction is reliable, for more effective fusion.
|
| 58 |
+
|
| 59 |
+
The attention map $\mathbf{a}^{s_i}$ in domain $S_{i}$ is defined as:
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
\mathbf{a}^{s_i}(h, w) = \begin{cases} 1, & \text{if } \mathbf{y}^{s_i}(h, w) \in \mathcal{C}_i \\ 0, & \text{if } \mathbf{y}^{s_i}(h, w) = \text{void}, \end{cases} \tag{1}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
where $(h,w)$ are pixel indices and void means no label. We train an attention network $M_{i}$ for each source domain $S_{i}$ . The attention maps are predicted as $\tilde{\mathbf{a}}^{s_i} = M_i(\mathbf{x}^{s_i})$ and $\tilde{\mathbf{a}}_i^t = M_i(\mathbf{x}^t)$ . The attention network $M_{i}$ is composed of the feature extractor $E_{i}$ and a new label predictor $B_{i}^{M}$ : $M_{i} = \{E_{i},B_{i}^{M}\}$ . $M_{i}$ is trained under an MSE loss $\mathcal{L}_{\mathrm{att}}$ together with $G_{i}$ in a multi-task setting.
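For illustration, the sketch below builds the binary attention target of Eq. (1) from a partially-annotated label map and computes $\mathcal{L}_{\mathrm{psu}} + \mathcal{L}_{\mathrm{att}}$ for one source domain. It is a minimal PyTorch sketch under our own assumptions (void pixels encoded as -1, a single attention channel trained through a sigmoid); it is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

VOID = -1  # assumed encoding of unlabeled ("void") pixels

def dat_losses(seg_logits, att_logits, labels):
    """L_psu (partial cross-entropy) plus L_att (MSE against the Eq. (1) target).

    seg_logits: (B, C_union, H, W) output of G_i in the unified label space.
    att_logits: (B, 1, H, W) raw output of the attention head B_i^M.
    labels:     (B, H, W) ground truth in C_i, VOID where unlabeled.
    """
    labeled = (labels != VOID).float()                      # a^{s_i} of Eq. (1)
    # Cross-entropy only over pixels that carry a label in this source domain.
    ce = F.cross_entropy(seg_logits, labels.clamp(min=0), reduction="none")
    l_psu = (ce * labeled).sum() / labeled.sum().clamp(min=1.0)
    # Attention head regressed onto the binary target with an MSE loss.
    l_att = F.mse_loss(torch.sigmoid(att_logits).squeeze(1), labeled)
    return l_psu + l_att
```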
|
| 66 |
+
|
| 67 |
+
# 3.2.2 Inference via Attention-Guided Fusion
|
| 68 |
+
|
| 69 |
+
We feed an image $\mathbf{x}$ into semantic segmentation networks $G_{i}$ to generate the corresponding probability maps $\hat{\mathbf{p}}_i\in [0,1]^{H\times W\times C_\cup}$ , and into different attention networks $M_{i}$ to generate attention maps $\hat{\mathbf{a}}_i$ . Then we fuse the predictions by averaging $\hat{\mathbf{p}}_i$ weighted by $\hat{\mathbf{a}}_i$ :
|
| 70 |
+
|
| 71 |
+
$$
|
| 72 |
+
\mathbf {f} = \frac {\sum_ {i = 1} ^ {K} \hat {\mathbf {a}} _ {i} \otimes \hat {\mathbf {p}} _ {i}}{\sum_ {j = 1} ^ {C _ {\cup}} \left(\sum_ {i = 1} ^ {K} \hat {\mathbf {a}} _ {i} \otimes \hat {\mathbf {p}} _ {i}\right) ^ {(j)}}, \tag {2}
|
| 73 |
+
$$
|
| 74 |
+
|
| 75 |
+
where $(\sum_{i=1}^{K} \hat{\mathbf{a}}_i \otimes \hat{\mathbf{p}}_i)^{(j)}$ yields the probability of the $j^{\text{th}}$ class. The predicted class is then obtained via argmax.
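A possible implementation of the fusion in Eq. (2) is sketched below: each branch's class probabilities are weighted by its attention map, summed over the $K$ branches, and renormalized over the unified label space before taking the argmax. Tensor shapes and names are our own assumptions.

```python
import torch

def attention_guided_fusion(probs, atts, eps=1e-8):
    """Eq. (2): fuse per-domain predictions, weighted by their attention maps.

    probs: list of K tensors (B, C_union, H, W), softmax outputs of G_1..G_K.
    atts:  list of K tensors (B, 1, H, W), attention maps of M_1..M_K in [0, 1].
    Returns the fused distribution f and the per-pixel argmax labels.
    """
    weighted = sum(a * p for a, p in zip(atts, probs))        # sum_i a_i ⊗ p_i
    f = weighted / (weighted.sum(dim=1, keepdim=True) + eps)  # renormalize over classes
    return f, f.argmax(dim=1)
```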
|
| 76 |
+
|
| 77 |
+
# 3.2.3 Uncertainty Maximization (UM)
|
| 78 |
+
|
| 79 |
+
Due to the lack of ground truth class supervision, while we have the attention-guided fusion, the wrong prediction of unlabeled samples in the source domains can still have negative effects for our cross-domain prediction fusion. In order to further reduce the negative effects of unlabeled samples $\mathbf{x}_u^{s_i}$ in source domains, we propose a module specifically to maximize uncertainties of the predictions on unlabeled samples in those domains. In particular, $G_{i}(\mathbf{x}_{u}^{s_{i}})$ is expected to equally spread the probability mass to all classes, i.e., obeying the uniform categorical distribution $\mathcal{U}(C_{\cup})$ . The probability density function $q(j)$ of $\mathcal{U}(C_{\cup})$ is formulated as $q(j) = \frac{1}{C_{\cup}}$ , where $j = 1,2,\dots,C_{\cup}$ is to represent different classes. The probability distribution of the network prediction on unlabeled samples $G_{i}(\mathbf{x}_{u}^{s_{i}})$ is denoted as $p(j) = G_{i}(\mathbf{x}_{u}^{s_{i}})^{(j)}$ , where $G_{i}(\mathbf{x}_{u}^{s_{i}})^{(j)}$ represents the probability of the $j^{\mathrm{th}}$ class. In order to maximize the
|
| 80 |
+
|
| 81 |
+

|
| 82 |
+
|
| 83 |
+

|
| 84 |
+
(a) Partially-Supervised Adaptation
|
| 85 |
+
(b) Fully-Supervised Adaptation
|
| 86 |
+
Figure 2: Illustration of our approach to mDALU. There are 2 stages, (a) partially supervised adaptation and (b) fully-supervised adaptation.
|
| 87 |
+
|
| 88 |
+
uncertainty of the prediction on the unlabeled samples, the distribution distance between $p(j)$ and $q(j)$ is expected to be minimized. Following the distribution distance metric in [5], we adopt the Pearson $\chi^2$ -divergence for measuring the distribution distance, which is formulated as,
|
| 89 |
+
|
| 90 |
+
$$
|
| 91 |
+
D_{\chi^2}(p \| q) = \int_{j} \left( \left( \frac{p(j)}{q(j)} \right)^{2} - 1 \right) q(j), \tag{3}
|
| 92 |
+
$$
|
| 93 |
+
|
| 94 |
+
$$
|
| 95 |
+
D _ {\chi^ {2}} (p \| q) = C _ {\cup} \sum_ {j = 1} ^ {C _ {\cup}} p (j) ^ {2} - 1. \tag {4}
|
| 96 |
+
$$
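The step from Eq. (3) to Eq. (4) follows by substituting the uniform density $q(j) = \frac{1}{C_{\cup}}$ into Eq. (3); the $C_{\cup}$ identical terms $\frac{1}{C_{\cup}}$ then sum to one:

$$
D_{\chi^2}(p \| q) = \sum_{j=1}^{C_\cup} \left( \big( C_\cup\, p(j) \big)^2 - 1 \right) \frac{1}{C_\cup} = C_\cup \sum_{j=1}^{C_\cup} p(j)^2 - \sum_{j=1}^{C_\cup} \frac{1}{C_\cup} = C_\cup \sum_{j=1}^{C_\cup} p(j)^2 - 1.
$$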
|
| 97 |
+
|
| 98 |
+
On the basis of Eq. (4), we propose the square loss $\mathcal{L}_{um}$ for minimizing the Pearson $\chi^2$ -divergence, i.e., maximizing the uncertainty of the prediction on the unlabeled samples. $\mathcal{L}_{um}$ can be written as
|
| 99 |
+
|
| 100 |
+
$$
|
| 101 |
+
\mathcal {L} _ {u m} = \sum_ {j = 1} ^ {C _ {\cup}} \left(G _ {i} \left(\mathbf {x} _ {u} ^ {s _ {i}}\right) ^ {(j)}\right) ^ {2}. \tag {5}
|
| 102 |
+
$$
|
| 103 |
+
|
| 104 |
+
Through the UM module, we encourage the model to predict the uniform categorical probability $\frac{1}{C_{\cup}}$ for unlabeled samples over the unlabeled classes. This preserves their uncertainty, so that the ground-truth supervision for those classes, coming from other source domains, can make the decision in the subsequent attention-guided fusion and the PSF process.
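A minimal sketch of the uncertainty-maximization term of Eq. (5), restricted to the unlabeled (void) source pixels; the void-mask convention matches the earlier sketches and is our assumption.

```python
import torch
import torch.nn.functional as F

def uncertainty_maximization_loss(seg_logits, labels, void_value=-1):
    """Eq. (5): mean over unlabeled pixels of sum_j p(j)^2.

    Minimizing this drives the prediction on unlabeled source pixels towards
    the uniform distribution 1/C_union, i.e. maximal uncertainty.
    """
    probs = F.softmax(seg_logits, dim=1)        # (B, C_union, H, W)
    unlabeled = (labels == void_value).float()  # (B, H, W)
    sq = probs.pow(2).sum(dim=1)                # sum_j p(j)^2 per pixel
    return (sq * unlabeled).sum() / unlabeled.sum().clamp(min=1.0)
```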
|
| 105 |
+
|
| 106 |
+
# 3.2.4 Attention-Guided Adversarial Alignment $(\mathbf{A}^3)$
|
| 107 |
+
|
| 108 |
+
It has been shown in the literature that adversarial alignment is effective for domain adaptation. We extend the idea to mDALU. For adversarial alignment, one discriminator
|
| 109 |
+
|
| 110 |
+
$D_{i}$ is used for each source domain, to align the distribution between the source domain $S_{i}$ and the target domain $\mathcal{T}$ . In general unsupervised domain adaptation, the discriminator training loss $\mathcal{L}_d$ and the adversarial loss $\mathcal{L}_{adv}$ [34] for the source domain $S_{i}$ and the target domain $\mathcal{T}$ are defined as
|
| 111 |
+
|
| 112 |
+
$$
|
| 113 |
+
\mathcal {L} _ {a d v} ^ {(i)} \left(\mathbf {x} ^ {t}\right) = - \log \left(D _ {i} \left(G _ {i} \left(\mathbf {x} ^ {t}\right)\right)\right) \tag {6}
|
| 114 |
+
$$
|
| 115 |
+
|
| 116 |
+
$$
|
| 117 |
+
\mathcal{L}_{d}^{(i)}\left(\mathbf{x}^{s_i}, \mathbf{x}^{t}\right) = - \log\left(D_{i}\left(G_{i}\left(\mathbf{x}^{s_i}\right)\right)\right) - \log\left(1 - D_{i}\left(G_{i}\left(\mathbf{x}^{t}\right)\right)\right). \tag{7}
|
| 118 |
+
$$
|
| 119 |
+
|
| 120 |
+
Yet, in our mDALU problem, there is no ground truth label guidance available for the unlabeled classes. A direct alignment between the source domain and the target domain will cause negative transfer, i.e., the transfer of incorrect knowledge from the unlabeled parts in the source domains to the target domain. Here, we again use our attention map $\mathbf{a}^{s_i}$ to alleviate this problem by proposing an attention-guided adversarial loss:
|
| 121 |
+
|
| 122 |
+
$$
|
| 123 |
+
\mathcal {L} _ {a ^ {3}} ^ {(i)} \left(\mathbf {x} ^ {t}\right) = - \log \left(D _ {i} \left(G _ {i} \left(\mathbf {x} ^ {t}\right) \otimes M _ {i} \left(\mathbf {x} ^ {t}\right)\right)\right), \tag {8}
|
| 124 |
+
$$
|
| 125 |
+
|
| 126 |
+
$$
|
| 127 |
+
\mathcal{L}_{d}^{(i)}\left(\mathbf{x}^{s_i}, \mathbf{x}^{t}\right) = - \log\left(D_{i}\left(G_{i}\left(\mathbf{x}^{s_i}\right) \otimes M_{i}\left(\mathbf{x}^{s_i}\right)\right)\right) - \log\left(1 - D_{i}\left(G_{i}\left(\mathbf{x}^{t}\right) \otimes M_{i}\left(\mathbf{x}^{t}\right)\right)\right), \tag{9}
|
| 128 |
+
$$
|
| 129 |
+
|
| 130 |
+
where $\otimes$ represents element-wise multiplication.
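The attention-guided adversarial terms of Eqs. (8)-(9) differ from standard output-space adversarial alignment only in that the predictions are multiplied by the attention maps before entering the discriminator. A hedged sketch follows; the binary-cross-entropy formulation, shapes, and names are our assumptions, and $D_i$ can be any patch-level binary classifier.

```python
import torch
import torch.nn.functional as F

def a3_losses(d_i, seg_src, att_src, seg_tgt, att_tgt):
    """Attention-guided adversarial alignment for one source domain S_i.

    seg_*: softmax predictions of G_i, (B, C_union, H, W).
    att_*: attention maps of M_i, (B, 1, H, W).
    d_i:   discriminator mapping attention-masked predictions to real/fake logits.
    Returns (l_d, l_a3): discriminator loss (Eq. 9) and generator loss (Eq. 8).
    """
    real = d_i(seg_src * att_src)          # source predictions, masked by attention
    fake = d_i(seg_tgt * att_tgt)          # target predictions, masked by attention
    l_d = F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) \
        + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake))
    # The generator side (G_i, M_i) tries to make target predictions look like source ones.
    l_a3 = F.binary_cross_entropy_with_logits(fake, torch.ones_like(fake))
    return l_d, l_a3
```

When optimizing the min-max objective of Eq. (11), one would typically alternate updates: $l_{a3}$ enters Eq. (10) with weight $\lambda$ for the segmentation and attention networks, while $l_d$ updates only $D_i$, computed on detached predictions.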
|
| 131 |
+
|
| 132 |
+
Then the overall loss for our method at the first stage is:
|
| 133 |
+
|
| 134 |
+
$$
|
| 135 |
+
\mathcal {L} _ {a l l} = \mathcal {L} _ {p s u} + \mathcal {L} _ {a t t} + \mathcal {L} _ {u m} + \lambda \sum_ {i = 1} ^ {K} \mathcal {L} _ {a ^ {3}} ^ {(i)}, \tag {10}
|
| 136 |
+
$$
|
| 137 |
+
|
| 138 |
+
where $\lambda$ is the hyper-parameter to balance out the attention-guided adversarial loss against other losses. The whole optimization objective for our first partially-supervised domain adaptation stage can be formulated as:
|
| 139 |
+
|
| 140 |
+
$$
|
| 141 |
+
\min _ {G _ {i}} \max _ {D _ {i}} \mathcal {L} _ {\text {a l l}}. \tag {11}
|
| 142 |
+
$$
|
| 143 |
+
|
| 144 |
+
# 3.2.5 Pseudo-Label Based Supervision Fusion (PSF)
|
| 145 |
+
|
| 146 |
+
In the first partially-supervised adaptation stage, knowledge in different label spaces $\mathcal{C}_i$ is transferred from different source domains to the target domain. In the second fully-supervised adaptation stage, we aim at learning and transferring knowledge in the complete and unified label space $\mathcal{C}_{\cup}$ between all domains jointly. In order to realize that, we complete the label spaces for all the related domains $S_{1}, S_{2}, \ldots, S_{K}, \mathcal{T}$ with pseudo-labels, i.e., fuse the supervision from different label spaces $\mathcal{C}_i$ to get the complete and unified supervision $\mathcal{C}_{\cup}$ . Here we present our pseudo-label based supervision fusion (PSF) method.
|
| 147 |
+
|
| 148 |
+
In order to complete the label space in the source domain $S_{i}$ , we feed each of the source image samples $\mathbf{x}^{s_i}$ into every semantic model $G_{k}, k = 1, \dots, K$ , to generate 'partial' semantic probability maps $\hat{\mathbf{p}}_k^{s_i} \in [0,1]^{H \times W \times C_\cup}$ and to every
|
| 149 |
+
|
| 150 |
+
attention network $M_{k}, k = 1, \dots, K$ for the attention map $\hat{\mathbf{a}}_k^{s_i} \in [0,1]^{H \times W}$ . The fused prediction $\mathbf{f}^{s_i}$ is obtained via Eq.(2). We denote the predicted label map as $\bar{\mathbf{y}}^{s_i}$ , generated by using an argmax operation over $\mathbf{f}^{s_i}$ . The 'pseudo-label' map $\hat{\mathbf{y}}^{s_i}$ for the source domain $S_i$ is defined as:
|
| 151 |
+
|
| 152 |
+
$$
|
| 153 |
+
\hat{\mathbf{y}}^{s_i}(h, w) = \begin{cases} \mathbf{y}^{s_i}(h, w), & \text{if } \mathbf{y}^{s_i}(h, w) \neq \text{void} \\ \bar{\mathbf{y}}^{s_i}(h, w), & \text{if } \mathbf{y}^{s_i}(h, w) = \text{void} \text{ and } \mathbf{f}^{s_i}(h, w, \bar{\mathbf{y}}^{s_i}(h, w)) > \delta \\ \text{void}, & \text{otherwise} \end{cases} \tag{12}
|
| 154 |
+
$$
|
| 155 |
+
|
| 156 |
+
where $\delta$ is a threshold determining whether to select the predicted pseudo-label.
|
| 157 |
+
|
| 158 |
+
On the target domain $\mathcal{T}$ , since no ground truth labels are available, we obtain pseudo labels directly from the predicted label map $\bar{\mathbf{y}}^t$ (obtained from $\mathbf{f}^t$ via an argmax):
|
| 159 |
+
|
| 160 |
+
$$
|
| 161 |
+
\hat{\mathbf{y}}^{t}(h, w) = \bar{\mathbf{y}}^{t}(h, w), \quad \text{if } \mathbf{f}^{t}(h, w, \bar{\mathbf{y}}^{t}(h, w)) > \delta. \tag{13}
|
| 162 |
+
$$
|
| 163 |
+
|
| 164 |
+
By using the generated fused pseudo-labels $\hat{\mathbf{y}}^{s_i},\hat{\mathbf{y}}^t,i = 1,\dots,K$ , we complete the label space from $\mathcal{C}_i$ to $\mathcal{C}_{\cup}$ for the source domain $S_{i}$ , and from $\varnothing$ to $\mathcal{C}_{\cup}$ for the target domain $\mathcal{T}$ . We then train the network $G$ for all the related domains $S_{1},S_{2},\ldots ,S_{K},\mathcal{T}$ with all the datasets in the unified label space. In total, the loss $\mathcal{L}_{fsa}$ for our second 'fully-supervised' adaptation stage is:
|
| 165 |
+
|
| 166 |
+
$$
|
| 167 |
+
\mathcal {L} _ {f s a} = \sum_ {i = 1} ^ {K} \mathcal {L} _ {c e} ^ {s _ {i}} + \mathcal {L} _ {c e} ^ {t}, \tag {14}
|
| 168 |
+
$$
|
| 169 |
+
|
| 170 |
+
where $\mathcal{L}_{ce}$ is the standard cross-entropy loss.
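The pseudo-label completion of Eqs. (12)-(13) can be sketched on top of the fused prediction $\mathbf{f}$ from Eq. (2) as follows: ground-truth labels are kept wherever available, confident fused predictions fill the void source pixels and all target pixels, and everything below the threshold $\delta$ remains void. The void encoding and the default value of $\delta$ are our assumptions.

```python
import torch

VOID = -1  # assumed ignore label

def complete_source_labels(fused, gt, delta=0.9):
    """Eq. (12): complete a partially-labeled source label map with pseudo-labels.

    fused: (B, C_union, H, W) fused probabilities f^{s_i} from Eq. (2).
    gt:    (B, H, W) partial ground truth with VOID where unlabeled.
    """
    conf, pred = fused.max(dim=1)                                  # confidence and argmax label
    pseudo = torch.where(conf > delta, pred, torch.full_like(pred, VOID))
    return torch.where(gt != VOID, gt, pseudo)                     # keep real labels where present

def target_pseudo_labels(fused, delta=0.9):
    """Eq. (13): pseudo-labels for the unlabeled target domain."""
    conf, pred = fused.max(dim=1)
    return torch.where(conf > delta, pred, torch.full_like(pred, VOID))
```

The completed maps then serve as targets for the standard cross-entropy of Eq. (14), with VOID pixels ignored.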
|
| 171 |
+
|
| 172 |
+
# 3.2.6 Inconsistent Taxonomies
|
| 173 |
+
|
| 174 |
+
The above method is able to deal with the mDALU problem under consistent taxonomies, i.e., the different classes in all source domains are exclusive with each other. Yet, there might be inconsistent taxonomies between different source domains, causing a performance drop for the inconsistent taxonomies classes. Here, we introduce the extension of our above method, to handle the inconsistent taxonomies problem. Denoting the classes in the label spaces $\mathcal{C}_i$ as $\mathbf{c}_i^o$ , we have $\mathcal{C}_i = \{\mathbf{c}_i^o,o = 1,2,\dots,C_i\}$ . Then the inconsistent taxonomies among different source domains can be defined as, $\exists \mathbf{c}_p^q\in \mathcal{C}_p,\mathbf{c}_m^n\in \mathcal{C}_m,p,m = 1,\dots,K,p\neq m,q = 1,\dots,\mathcal{C}_p,n = 1,\dots,\mathcal{C}_m$ , we have $\mathbf{c}_p^q\neq \mathbf{c}_m^n$ , and $\mathbf{c}_p^q\cap \mathbf{c}_m^n\neq \emptyset$ . The inconsistent taxonomies classes between different source domains $\mathcal{S}_p$ and $\mathcal{S}_m$ are denoted as $\mathbf{c}_p^q\in \mathcal{C}_p$ and $\mathbf{c}_m^n\in \mathcal{C}_m$ . For example, the truck is labeled as "truck" class $\mathbf{c}_p^q$ in one dataset $\mathcal{S}_p$ , while it is labeled as "vehicle" class $\mathbf{c}_m^n$ together with other vehicles in another dataset $\mathcal{S}_m$ . Another typical example is motorcycles being labeled as "cycle" class $\mathbf{c}_p^q$ together with other cycles in one dataset $\mathcal{S}_p$ , but being labeled as "vehicle" class $\mathbf{c}_m^n$ together with other
|
| 175 |
+
|
| 176 |
+
vehicles in another dataset $S_{m}$ . In the unified label space of the target domain, the conflict part $\mathbf{c}_p^q\cap \mathbf{c}_m^n$ is assigned to either $\mathbf{c}_p^q$ or $\mathbf{c}_m^n$ exclusively. Without loss of generality and for reasons of clarity, it is assumed that the $\mathbf{c}_p^q\cap \mathbf{c}_m^n$ is assigned to $\mathbf{c}_p^q$ . Then in order to solve the conflict of $\mathbf{c}_p^q$ and $\mathbf{c}_m^n$ , in the attention-guided fusion, we introduce the additional class-wise weight map $\mathbf{w}_i\in \mathbb{R}^{H\times W\times C_\cup}$ , and Eq. (2) is extended to Eq. (16),
|
| 177 |
+
|
| 178 |
+
$$
|
| 179 |
+
\mathbf{w}_i(h, w, j) = \begin{cases} v, & \text{if } \arg\max \hat{\mathbf{p}}_i(h, w) = q', \text{ and } i = p, \text{ and } \arg\max \hat{\mathbf{p}}_m(h, w) = n', \text{ and } j = q' \\ 1, & \text{otherwise} \end{cases} \tag{15}
|
| 180 |
+
$$
|
| 181 |
+
|
| 182 |
+
$$
|
| 183 |
+
\mathbf {f} = \frac {\sum_ {i = 1} ^ {K} \hat {\mathbf {a}} _ {i} \otimes \hat {\mathbf {p}} _ {i} \otimes \mathbf {w} _ {i}}{\sum_ {j = 1} ^ {C _ {\cup}} \left(\sum_ {i = 1} ^ {K} \hat {\mathbf {a}} _ {i} \otimes \hat {\mathbf {p}} _ {i} \otimes \mathbf {w} _ {i}\right) ^ {(j)}}, \tag {16}
|
| 184 |
+
$$
|
| 185 |
+
|
| 186 |
+
where $v > 1$ in Eq. (15) is a hyper-parameter, set to 5.0. $v$ is used to increase the weight of class $\mathbf{c}_p^q$ of the corresponding prediction $\hat{\mathbf{p}}_p$ in Eq. (16), to convert $\mathbf{c}_p^q\cap \mathbf{c}_m^n$ to $\mathbf{c}_p^q$ in the prediction fusion. $q^{\prime}$ and $n^{\prime}$ are the class indices of $\mathbf{c}_p^q$ and $\mathbf{c}_m^n$ in the unified label space $\mathcal{C}_{\cup}$ . Correspondingly, under inconsistent taxonomies, besides the unlabeled samples in the source domains being completed with the predicted pseudo-label as in Eq. (12), the conflict part $\mathbf{c}_p^q\cap \mathbf{c}_m^n$ , which is labeled as $\mathbf{c}_m^n$ originally in $S_{m}$ , is relabeled with the predicted pseudo-label $\bar{\mathbf{y}}^{s_i}(h,w)$ , i.e.,
|
| 187 |
+
|
| 188 |
+
$$
|
| 189 |
+
\hat{\mathbf{y}}^{s_m}(h, w) = q', \quad \text{if } \mathbf{f}^{s_m}(h, w, q) > \delta \text{ and } \bar{\mathbf{y}}^{s_m}(h, w) = q' \text{ and } \mathbf{y}^{s_m}(h, w) = n'. \tag{17}
|
| 190 |
+
$$
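For a single conflicting class pair, the weight map of Eq. (15) can be sketched as below: wherever domain $p$ predicts the fine class $q'$ and domain $m$ predicts the coarse class $n'$, the weight of class $q'$ in domain $p$'s prediction is raised to $v$. Shapes, names, and the single-pair simplification are our assumptions; $v = 5.0$ follows the text.

```python
import torch

def taxonomy_weight_maps(probs, p, m, q_idx, n_idx, v=5.0):
    """Eq. (15): per-pixel, per-class weights resolving one taxonomy conflict.

    probs: list of K tensors (B, C_union, H, W), per-domain softmax outputs.
    p, m:  indices of the two conflicting source domains.
    q_idx, n_idx: unified-label-space indices q' and n' of c_p^q and c_m^n.
    Returns a list of K weight maps w_i, all ones except class q' of domain p.
    """
    weights = [torch.ones_like(x) for x in probs]
    conflict = (probs[p].argmax(dim=1) == q_idx) & (probs[m].argmax(dim=1) == n_idx)
    weights[p][:, q_idx][conflict] = v   # up-weight the fine-grained class on conflict pixels
    return weights
```

The fused prediction of Eq. (16) is then obtained as in the earlier fusion sketch, with each $\hat{\mathbf{p}}_i$ additionally multiplied by $\mathbf{w}_i$ before summation and renormalization.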
|
| 191 |
+
|
| 192 |
+
# 4. Experiments
|
| 193 |
+
|
| 194 |
+
We evaluate the effectiveness of our method mDALU under different settings. We build benchmarks for image classification, 2D semantic image segmentation, and 2D-3D cross-modal semantic segmentation.
|
| 195 |
+
|
| 196 |
+
# 4.1. Image Classification
|
| 197 |
+
|
| 198 |
+
Setup. In the classification benchmark, we adopt the digits classification images from three different datasets, MNIST [22], SyntheticDigits [10], and SVHN [24], coined "MT", "SYN" and "SVHN", resp. Each time, one of them is taken as the target domain, the other two as source domains. There are 10 classes, from '0' to '9', in the target domain. In our main setting, we adopt the most difficult setup to evaluate different methods, where the label spaces of different source domains are non-overlapping. Only half the classes are labeled in each of the source domains. The partially-overlapping situation is also explored. For fair comparison, we adopt the same network architecture used in [26] for all methods. The classification performance is evaluated on all 10 classes in the target domain.
|
| 199 |
+
|
| 200 |
+
<table><tr><td>Method</td><td>MT</td><td>SYN</td><td>SVHN</td><td>Avg</td></tr><tr><td>Source</td><td>76.76 ± 0.63</td><td>61.77 ± 1.05</td><td>43.42±1.89</td><td>60.65±1.19</td></tr><tr><td>DANN[10]</td><td>77.30 ± 2.57</td><td>60.31 ± 0.99</td><td>41.65±2.34</td><td>59.75±1.97</td></tr><tr><td>DANN *</td><td>71.29 ± 0.48</td><td>55.94 ± 0.51</td><td>35.60 ± 1.63</td><td>54.28 ± 0.87</td></tr><tr><td>DCTN [39]</td><td>68.10±0.2</td><td>62.72±0.30</td><td>48.11±0.57</td><td>59.64±0.36</td></tr><tr><td>DCTN *</td><td>72.01 ± 1.22</td><td>63.33 ± 0.20</td><td>49.34 ± 1.28</td><td>61.59 ± 0.90</td></tr><tr><td>M3SDA [26]</td><td>76.56±0.71</td><td>61.25±2.33</td><td>43.13±3.55</td><td>60.31±2.20</td></tr><tr><td>M3SDA *</td><td>72.50 ± 2.64</td><td>55.92 ± 1.04</td><td>36.24 ± 1.70</td><td>54.89 ± 1.79</td></tr><tr><td>AENT[44]</td><td>73.24±1.76</td><td>68.66±1.32</td><td>52.80 ± 0.92</td><td>64.90 ± 1.33</td></tr><tr><td>Ours w/o PSF</td><td>81.23±0.92</td><td>78.97±0.45</td><td>65.20±0.58</td><td>75.13±0.65</td></tr><tr><td>DCTN w/ PL [39]</td><td>73.40±0.85</td><td>65.63±0.43</td><td>52.12±0.07</td><td>63.72 ± 0.45</td></tr><tr><td>AENT[44] w/ PL</td><td>78.56±1.23</td><td>70.25 ± 0.39</td><td>59.24 ± 1.01</td><td>69.35 ± 0.88</td></tr><tr><td>Ours</td><td>86.18±0.45</td><td>81.91±0.33</td><td>68.92±0.81</td><td>79.00 ± 0.53</td></tr></table>
|
| 201 |
+
|
| 202 |
+
Table 2: Quantitative comparison of image classification. "MT", "SYN", and "SVHN" denote the target domain. "PL" denotes adding the pseudo-label training module, adjusted to each method's own design. * denotes removing the unlabeled samples from the training data. We implement AENT for classification by utilizing the ambiguity cross-entropy loss proposed in [44].
|
| 203 |
+
|
| 204 |
+
Comparison with SOTA. Table 2 compares our method with other SOTA methods, which include 1) the unsupervised domain adaptation method DANN [10], 2) the category-shift unsupervised domain adaptation method DCTN [39], 3) the multi-source unsupervised domain adaptation method $\mathbf{M}^3$ SDA [26], and 4) the label unification method AENT [44]. Without the pseudo-label (PL) generation part, the adaptation-based methods DANN, DCTN, and $\mathbf{M}^3$ SDA suffer from negative transfer or perform similarly to the baseline trained with source data only. This is because each source domain can only provide guidance for a partial label space, so adapting to several source domains biases the target-domain predictions towards different partial label spaces. The resulting predictions are contradictory, and the model struggles to cover the complete label space. In contrast, the label-unification based method AENT obtains a performance gain of $4.25\%$ over the source-only baseline, from $60.65\%$ to $64.90\%$ . This is because it uses an ambiguity cross-entropy loss that prevents the predictions on source domain data from being restricted to a partial label space.
|
| 205 |
+
|
| 206 |
+
In our first partially-supervised adaptation stage, the performance is further improved to $75.13\%$ , which proves the effectiveness of our DAT, UM and $\mathrm{A}^3$ module for preventing the negative transfer effect. After the second fully-supervised adaptation stage, by adding the PSF module, our model strongly outperforms DCTN [39] and AENT [44], both with pseudo-label training, by $15.28\%$ and $9.65\%$ , resp. This proves the effectiveness of our entire method for domain adaptation, label space completion and supervision fusion. The ablation results in Table 3 show that each part of our model contributes to its performance.
|
| 207 |
+
|
| 208 |
+
Partially Overlapping. Fig. 3 shows that the testing accuracy on the target domain increases as more and more common classes in the source domains become available.
|
| 209 |
+
|
| 210 |
+

|
| 211 |
+
Figure 3: Accuracy in the target domain as a function of the number of overlapping classes between the source domains.
|
| 212 |
+
|
| 213 |
+
<table><tr><td>MBR</td><td>UM</td><td>A3</td><td>PSF</td><td>MT</td><td>SYN</td><td>SVHN</td><td>Avg</td></tr><tr><td></td><td></td><td></td><td></td><td>76.76 ± 0.63</td><td>61.77 ± 1.05</td><td>43.42±1.89</td><td>60.65±1.19</td></tr><tr><td>✓</td><td></td><td></td><td></td><td>72.21±1.89</td><td>62.41±0.58</td><td>50.24±1.23</td><td>61.62±1.23</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td>84.74±0.54</td><td>76.12±0.85</td><td>58.39±0.57</td><td>73.08±0.65</td></tr><tr><td>✓</td><td>✓</td><td>✓*</td><td></td><td>81.38±0.79</td><td>78.20±1.3</td><td>65.12±0.64</td><td>74.90 ± 0.91</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td></td><td>81.23±0.92</td><td>78.97±0.45</td><td>65.20±0.58</td><td>75.13 ± 0.65</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>86.18±0.45</td><td>81.91±0.33</td><td>68.92±0.81</td><td>79.00 ± 0.53</td></tr></table>
|
| 214 |
+
|
| 215 |
+
Table 3: Ablation study under the image classification setting. MBR: multi-branch network, i.e., adopts different networks $G_{i}$ for different source domains. * indicates there is no adversarial part in the $\mathrm{A}^3$ module, i.e., only the DAT module. The best results are denoted in bold.
|
| 216 |
+
|
| 217 |
+
<table><tr><td>Method</td><td>MT</td><td>SYN</td><td>SVHN</td><td>Avg</td></tr><tr><td>Source</td><td>82.10±1.50</td><td>73.37±0.67</td><td>57.50±1.93</td><td>70.99 ± 1.37</td></tr><tr><td>DANN[10]</td><td>80.13±1.60</td><td>72.97±0.49</td><td>55.00±0.73</td><td>69.37 ± 0.94</td></tr><tr><td>DCTN[39]</td><td>78.56±0.47</td><td>72.33 ± 0.04</td><td>60.86±0.21</td><td>70.58 ± 0.24</td></tr><tr><td>M3SDA[26]</td><td>81.52 ± 1.55</td><td>72.91 ± 0.68</td><td>54.26±0.66</td><td>69.56 ± 0.96</td></tr><tr><td>AENT[44]</td><td>79.12 ± 1.07</td><td>81.99 ± 0.87</td><td>69.07 ± 1.93</td><td>76.73 ± 1.29</td></tr><tr><td>Ours w/o PSF</td><td>85.39 ± 1.32</td><td>85.33± 1.21</td><td>76.48±1.31</td><td>82.40 ± 1.28</td></tr></table>
|
| 218 |
+
|
| 219 |
+
Table 4: Quantitative comparison of image classification, under the partial overlap setting with 4 common classes.
|
| 220 |
+
|
| 221 |
+
In Table 4, we compare the performance of our method with other SOTA methods when the source domains are partially overlapping, with 4 common classes. Our method still strongly outperforms the adaptation-based methods DANN, DCTN, $\mathbf{M}^3\mathbf{SDA}$ , and the label-unification based method AENT: $82.40\%$ vs. $69.37\%$ , $70.58\%$ , $69.56\%$ , and $76.73\%$ , resp. This further verifies the effectiveness of our model in the partial overlap situation.
|
| 222 |
+
|
| 223 |
+
# 4.2. 2D Semantic Image Segmentation
|
| 224 |
+
|
| 225 |
+
Setup. In the single mode semantic segmentation setting, we adopt the synthetic-to-real image semantic segmentation setup. The synthetic image datasets GTA5 [30] and SYNTHIA [32] are taken as the source domains, while the real image dataset Cityscapes [7] is used as the target domain. Information of 19 classes needs to be transferred to the Cityscapes dataset. In our main setting, the label spaces of SYNTHIA and GTA5 are non-overlapping. In the SYNTHIA dataset, the labels of 7 classes are available, incl. road, sidewalk, building, vegetation, sky, person and car. In GTA5, the labels of 12 classes are available, namely wall, fence, pole, light, sign, terrain, rider, truck, bus, train, motorcycle and bicycle. Furthermore, we also explore the performance of our model when the images of the two source domains are fully labeled. Moreover, we verify
|
| 226 |
+
|
| 227 |
+
<table><tr><td>Method</td><td>NT</td><td>T</td></tr><tr><td>Source</td><td>17.7</td><td>24.0</td></tr><tr><td>AdaptSegNet[34]</td><td>7.7</td><td>30.8</td></tr><tr><td>MinEnt[37]</td><td>27.1</td><td>30.1</td></tr><tr><td>Advent[37]</td><td>11.8</td><td>30.3</td></tr><tr><td>Ours w/o PSF</td><td>36.3</td><td>38.1</td></tr><tr><td>Ours (ADV)</td><td>40.1</td><td>41.5</td></tr><tr><td>Ours (PSF)</td><td>37.3</td><td>42.4</td></tr><tr><td>Ours (ADV+PSF)</td><td>40.6</td><td>42.8</td></tr></table>
|
| 228 |
+
|
| 229 |
+
(a)
|
| 230 |
+
|
| 231 |
+
<table><tr><td>MBR</td><td>UM</td><td>A3</td><td>PSF</td><td>ADV</td><td>NT</td><td>T</td></tr><tr><td rowspan="2">✓</td><td></td><td></td><td></td><td></td><td>17.7</td><td>24.0</td></tr><tr><td></td><td></td><td></td><td></td><td>20.9</td><td>21.4</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td></td><td>27.6</td><td>36.8</td></tr><tr><td>✓</td><td>✓</td><td>✓*</td><td></td><td></td><td>29.1</td><td>37.0</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td></td><td></td><td>36.3</td><td>38.1</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td>✓</td><td>35.4</td><td>40.9</td></tr><tr><td>✓</td><td>✓</td><td></td><td>✓</td><td></td><td>31.4</td><td>41.5</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td></td><td>✓</td><td>40.1</td><td>41.5</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>37.3</td><td>42.4</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>40.6</td><td>42.8</td></tr></table>
|
| 232 |
+
|
| 233 |
+
Table 5: (a) Quantitative comparison of single mode semantic segmentation, SYNTHIA+GTA5 $\rightarrow$ Cityscapes. The mIoU results are reported for 19 classes. (b) Ablation study for single mode segmentation. * indicates there is no adversarial part in the $A^3$ module, i.e., only the DAT module. "ADV+PSF" means to combine "ADV" and "PSF" by completing the label space and generating pseudo-labels in the source and target domains, then adversarial alignment in the output space is adopted during the second stage training.
|
| 234 |
+
|
| 235 |
+
ify the effectiveness of our model when the taxonomies of different source domains are inconsistent. In those inconsistency experiments, for GTA5, the labels wall, fence, pole, light, sign, terrain, truck, bus, train, person (incl. person and rider), and cycle (incl. bicycle and motorcycle) are available. In SYNTHIA, the labels road, sidewalk, building, vegetation, sky, person, rider, car, public facilities (incl. wall, fence, pole), motorcycle and bicycle are available. In order to further evaluate the performance of all methods when combined with pixel-level domain adaptation methods [45, 16], we conduct experiments in two settings: 1) source domain images are not translated with CycleGAN [45], named "NT"; 2) source domain images are translated with CycleGAN, named "T". Also, in order to verify model performance combined with the output-level adaptation method [34], we conduct additional experiments which include "ADV" in the fully-supervised adaptation stage. "ADV" generates the complete source domain label as in PSF, and then trains the semantic segmentation model via adversarial adaptation between the pseudo-complete source domain and the unlabeled target domain in the output-level space. For a fair comparison, all methods use the DeepLabv2-ResNet101 [4, 14] semantic segmentation network.
|
| 236 |
+
|
| 237 |
+
Comparison with SOTA. In Table 5a, we show a quantitative comparison for semantic segmentation between our method and other SOTA methods. It is shown that our method without adding PSF strongly outperforms the adaptation-based AdaptSegNet[34], the self-supervision-based MinEnt[37], and the method combining adaptation and self-supervision Advent [37]. Our method achieves $36.3\%$ and $38.1\%$ in the "NT" and "T" settings, resp. Similar to the image classification results, without using the translated source images, the adaptation-based methods suffer from negative transfer and the performance is lower than the source-only baseline. By using the translated source images in "T", different source domain images are
|
| 238 |
+
|
| 239 |
+
(b)
|
| 240 |
+
|
| 241 |
+
<table><tr><td>Method</td><td>mIoU*</td><td>mIoU</td></tr><tr><td>Source</td><td>42.8</td><td>39.1</td></tr><tr><td>AdaptSegNet[34]</td><td>45.2</td><td>40.8</td></tr><tr><td>Minentropy[37]</td><td>46.4</td><td>42.2</td></tr><tr><td>Advent[37]</td><td>46.7</td><td>42.9</td></tr><tr><td>Ours w/o PSF</td><td>46.8</td><td>43.1</td></tr><tr><td>Source[43]</td><td>37.3</td><td>-</td></tr><tr><td>MADAN[43]</td><td>41.4</td><td>-</td></tr><tr><td>Ours w/o PSF</td><td>41.9</td><td>38.0</td></tr></table>
|
| 242 |
+
|
| 243 |
+
Table 6: Single mode segmentation results, under fully-labeled setting and “T”. mIoU* is the mean IoU of 16 classes in SYNTHIA, while mIoU is that of all 19 classes.
|
| 244 |
+
|
| 245 |
+

Figure 4: Qualitative results of 2D semantic segmentation. Panels: (a) Image, (b) Ground Truth, (c) Source Only, (d) MinEnt, (e) Ours.
all Cityscapes-like images. The different source domains can be seen as a larger unified source domain, which can provide guidance for the complete label space to some extent. So all adaptation-based or self-supervision-based methods perform much better in the "T" situation, compared with the non-adapted baseline. Yet, even in the "T" situation, our method still provides an advantage by further completing the label space through our partially supervised adaptation. This proves the effectiveness of our method in preventing negative transfer and in completing the label space. By further adding the second "fully-supervised" adaptation stage, the model achieves a new SOTA performance in both the "T" and the "NT" settings. An ablation study, see Table 5b, confirms that all parts of our method add to its performance, and that the output space alignment "ADV" is helpful as well. Fig. 4 shows qualitative results on Cityscapes.
|
| 262 |
+
|
| 263 |
+
Fully labeled. In the fully labeled setting, i.e., the source domain images are labeled with all considered classes - 16 classes in SYNTHIA and 19 classes in GTA5 - Table 6 shows that our model still outperforms other unsupervised domain adaptive semantic segmentation methods, $43.1\%$ vs. $40.8\%$ , $42.2\%$ , and $42.9\%$ . Our model also outperforms the SOTA method for multi-source domain adaptive semantic segmentation MADAN [43], $41.9\%$ vs. $41.4\%$ .
|
| 264 |
+
|
| 265 |
+
Inconsistent Taxonomies. Table 7 shows that our method is advantageous when taxonomies are inconsistent, $40.0\%$ vs. $28.1\%$ , $31.9\%$ , $32.2\%$ . In the partially supervised adaptation stage, as in Sec. 3.2.6, by adding higher weights to "person", "rider", "motorcycle" and "bicycle" for SYNTHIA and "wall", "fence" and "pole" for GTA5, our method can achieve a higher performance than inference without weighting, $37.2\%$ vs. $35.3\%$ . After the fully supervised adaptation stage, the performance can be further improved to $40.0\%$ . The detailed performance for inconsis
|
| 266 |
+
|
| 267 |
+
<table><tr><td>Method</td><td>wall</td><td>fence</td><td>pole</td><td>person</td><td>rider</td><td>motorcycle</td><td>bicycle</td><td>mIoU</td></tr><tr><td>Source</td><td>2.6</td><td>12.0</td><td>12.3</td><td>40.6</td><td>0.5</td><td>0.1</td><td>28.6</td><td>19.8</td></tr><tr><td>AdaptSegNet[34]</td><td>7.1</td><td>2.6</td><td>4.0</td><td>33.2</td><td>6.9</td><td>1.8</td><td>37.6</td><td>28.1</td></tr><tr><td>Minentropy[37]</td><td>6.7</td><td>18.1</td><td>23.0</td><td>28.8</td><td>6.6</td><td>1.0</td><td>42.3</td><td>31.9</td></tr><tr><td>Advent[37]</td><td>6.2</td><td>11.5</td><td>11.4</td><td>32.8</td><td>12.2</td><td>0.9</td><td>41.2</td><td>32.2</td></tr><tr><td>Ours w/o PSF</td><td>12.3</td><td>15.2</td><td>21.2</td><td>48.4</td><td>3.3</td><td>1.3</td><td>42.4</td><td>35.3</td></tr><tr><td>Ours w/o PSF *</td><td>14.1</td><td>15.3</td><td>30.6</td><td>48.1</td><td>17.9</td><td>13.0</td><td>42.1</td><td>37.2</td></tr><tr><td>Ours (PSF)</td><td>13.3</td><td>17.9</td><td>30.6</td><td>53.7</td><td>18.2</td><td>19.8</td><td>43.2</td><td>40.0</td></tr></table>
|
| 268 |
+
|
| 269 |
+
Table 7: Quantitative comparison of single mode segmentation, with inconsistent taxonomies, in the "T" setting. *During inference, an additional weight map is applied in case of inconsistent taxonomies, as in Sec. 3.2.6. The detailed performance on the inconsistent-taxonomy classes is also shown. The mIoU is reported for 19 classes.
|
| 270 |
+
|
| 271 |
+
tent-taxonomy classes in Table 7 underlines the effectiveness of our method when taxonomies are inconsistent.
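For concreteness, a minimal sketch of this inference-time re-weighting, assuming a unified-label-space segmentation model that outputs per-pixel class probabilities; the class indices and the weight value below are illustrative placeholders rather than the exact settings of Sec. 3.2.6:

```python
import torch

def reweighted_prediction(probs, class_weights):
    """Re-weight per-class probabilities before taking the argmax.

    probs:         (B, C, H, W) softmax output of the unified-label-space model
    class_weights: (C,) per-class factor, e.g. > 1 for classes whose labels are
                   unreliable in one of the source taxonomies
    """
    weighted = probs * class_weights.view(1, -1, 1, 1)
    return weighted.argmax(dim=1)

# Illustrative usage (Cityscapes-style 19-class ordering assumed; the weight value
# is a hypothetical hyperparameter, not the one used in the paper).
num_classes = 19
weights = torch.ones(num_classes)
for idx in [11, 12, 17, 18, 3, 4, 5]:  # person, rider, motorcycle, bicycle, wall, fence, pole
    weights[idx] = 1.5
```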
|
| 272 |
+
|
| 273 |
+
# 4.3. Cross-Modal Semantic Segmentation
|
| 274 |
+
|
| 275 |
+
Setup. In the cross-modal semantic segmentation setting, the 2D RGB images from Cityscapes [7] and the 3D LiDAR point clouds from Nuscenes [1] are treated as two different source domains, while the paired but unlabeled 2D RGB images and 3D point clouds from A2D2 [11] are used as the target domain. There are 10 classes in total that need to be transferred to the target domain. In Cityscapes, the labels for 6 classes are given, covering road, sidewalk, building, pole, sign and nature. In Nuscenes, the labels for 4 classes are given, incl. person, car, truck and bike. The 2D RGB images and 3D point clouds in the target domain are registered via a projection matrix between 2D pixels and 3D points. Following [18], we adopt U-Net-ResNet34 [31, 14] as the 2D semantic segmentation network, and SparseConvNet [12] for 3D semantic segmentation. Due to the challenge of aligning features for the 3D point clouds, the $\mathrm{A}^3$ module is not included in the cross-modal setting.
|
| 276 |
+
|
| 277 |
+
Comparison with the SOTA. As shown in Table 8, similar to the image classification and the single mode semantic segmentation results, the SOTA cross-modal unsupervised adaptation method xMUDA [18] shows an obvious negative transfer effect, resulting in a performance drop for the 2D model, the 3D model, and the fused one. Furthermore, we designed reasonable baseline methods for comparison: 1) $\mathrm{ES} + \mathrm{MinEnt}$: the predictions from the 2D and 3D networks are averaged in the target domain through the 2D and 3D point correspondence during training, and the fused prediction probability is optimized using the minimum entropy loss [37]. 2) $\mathrm{ES} + \mathrm{KL}$: the KL-divergence [18] is utilized to align the 2D/3D predictions and the fused predictions for the corresponding points in the target domain, resp. 3) xMUDA + AKL: the KL-divergence alignment between 2D and 3D in the target domain is weighted adaptively, to reduce the wrong guidance from the unlabeled parts. 4)
|
| 278 |
+
|
| 279 |
+
<table><tr><td>Cityscapes + Nuscenes → A2D2</td><td>2D</td><td>3D</td><td>Fuse</td></tr><tr><td>Source</td><td>37.5</td><td>2.0</td><td>42.5</td></tr><tr><td>xMUDA[18]</td><td>16.3</td><td>1.7</td><td>9.1</td></tr><tr><td>ES + MinEnt[37]</td><td>22.3</td><td>1.5</td><td>20.8</td></tr><tr><td>ES + KL[18]</td><td>21.7</td><td>1.5</td><td>19.7</td></tr><tr><td>xMUDA + AKL</td><td>27.5</td><td>2.3</td><td>21.1</td></tr><tr><td>xMUDA + AKL + COMP</td><td>32.1</td><td>2.9</td><td>37.7</td></tr><tr><td>Ours w/o PSF</td><td>38.1</td><td>2.4</td><td>49.9</td></tr><tr><td>Ours</td><td>54.9</td><td>37.1</td><td>55.7</td></tr></table>
|
| 280 |
+
|
| 281 |
+
Table 8: Quantitative comparison of cross-modal segmentation, Nuscenes+Cityscapes $\rightarrow$ A2D2. "Fuse" represents the average fusion of the prediction probabilities from the 2D and 3D models; the final class prediction is the maximum of the fused probability. "ES" means 2D and 3D average fusion ensemble. "KL" means KL-divergence alignment. "AKL" means adaptive KL-divergence alignment. "COMP" means complementary condition constraint for the point. The mIoU is reported over 10 classes on A2D2.
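A minimal sketch of the average fusion described in the caption, assuming the 2D predictions have already been gathered at the pixels onto which the 3D points project; names are illustrative:

```python
import torch

def fuse_2d_3d(probs_2d, probs_3d):
    """Average-fuse per-point class probabilities from the 2D and 3D networks.

    probs_2d, probs_3d: (N_points, C) softmax outputs for the same N projected points.
    Returns the fused probabilities and the final class prediction (argmax of the fusion).
    """
    fused = 0.5 * (probs_2d + probs_3d)
    return fused, fused.argmax(dim=1)
```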
|
| 282 |
+
|
| 283 |
+

Figure 5: Qualitative results of the cross-modal setting. Panels: (a) A2D2, (b) Ground Truth, (c) 2D (Ours), (d) 3D (Ours).
|
| 295 |
+
|
| 296 |
+
$\mathrm{xMUDA} + \mathrm{AKL} + \mathrm{COMP}$ : following baseline 3), another constraint, that the weights related to 2D and 3D need to be complementary, is added. It is shown that our method prevents negative transfer without the PSF component, outperforming the non-adapted baseline. Then by adding the PSF module, the 2D and 3D single-model performance is strongly improved, achieving $54.9\%$ and $37.1\%$ , resp. In Fig. 5, we show qualitative results in the target domain. The good performance proves the effectiveness of our method for the mDALU with partial modalities. This opens up the avenue to combine datasets collected with different sensors and offers the possibility of cheaply evaluating new combinations of sensors without annotating their data.
|
| 297 |
+
|
| 298 |
+
# 5. Conclusion
|
| 299 |
+
|
| 300 |
+
In this paper, we introduced the multi-source domain adaptation and label unification problem with partial datasets, called mDALU, and proposed a novel multi-stage approach for it, consisting of a partially supervised and a fully supervised adaptation stage. The effectiveness of our approach is demonstrated through extensive experiments on different benchmarks.
|
| 301 |
+
|
| 302 |
+
Acknowledgements. This research has received funding from the EU Horizon 2020 research and innovation programme under grant agreement No. 820434. Dengxin Dai is the corresponding author.
|
| 303 |
+
|
| 304 |
+
# References
|
| 305 |
+
|
| 306 |
+
[1] Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027, 2019. 8
|
| 307 |
+
[2] Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Michael I. Jordan. Partial transfer learning with selective adversarial networks. In CVPR, 2018. 2
|
| 308 |
+
[3] Zhangjie Cao, Lijia Ma, Mingsheng Long, and Jianmin Wang. Partial adversarial domain adaptation. In ECCV, 2018. 2
|
| 309 |
+
[4] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. TPAMI, 40(4):834-848, 2017. 7
|
| 310 |
+
[5] Minghao Chen, Hongyang Xue, and Deng Cai. Domain adaptation for semantic segmentation with maximum squares loss. In CVPR, 2019. 4
|
| 311 |
+
[6] Yuhua Chen, Wen Li, Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Domain adaptive faster r-cnn for object detection in the wild. In CVPR, 2018. 1
|
| 312 |
+
[7] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, 2016. 6, 8
|
| 313 |
+
[8] Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from multiple sources. JMLR, 9(57):1757-1774, 2008. 2
|
| 314 |
+
[9] Dengxin Dai, Christos Sakaridis, Simon Hecker, and Luc Van Gool. Curriculum model adaptation with synthetic and real data for semantic foggy scene understanding. IJCV, 128(5):1182-1204, 2020. 2
|
| 315 |
+
[10] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, 2015. 2, 5, 6
|
| 316 |
+
[11] Jakob Geyer, Yohannes Kassahun, Mentar Mahmudi, Xavier Ricou, Rupesh Durgesh, Andrew S Chung, Lorenz Hauswald, Viet Hoang Pham, Maximilian Muhlegg, Sebastian Dorn, et al. A2d2: Audi autonomous driving dataset. arXiv preprint arXiv:2004.06320, 2020. 8
|
| 317 |
+
[12] Benjamin Graham, Martin Engelcke, and Laurens van der Maaten. 3d semantic segmentation with submanifold sparse convolutional networks. In CVPR, 2018. 8
|
| 318 |
+
[13] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, 2020. 1
|
| 319 |
+
[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. 7, 8
|
| 320 |
+
[15] Judy Hoffman, Mehryar Mohri, and Ningshan Zhang. Algorithms and theory for multiple-source adaptation. In NeurIPS, 2018. 2
|
| 321 |
+
[16] Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A. Efros, and Trevor Darrell. Cycada: Cycle consistent adversarial domain adaptation. In ICML, 2018. 7
|
| 324 |
+
[17] Lukas Hoyer, Dengxin Dai, Yuhua Chen, Adrian Köring, Suman Saha, and Luc Van Gool. Three ways to improve semantic segmentation with self-supervised depth estimation. In CVPR, 2021. 1
|
| 325 |
+
[18] Maximilian Jaritz, Tuan-Hung Vu, Raoul de Charette, Emilie Wirbel, and Patrick Perez. xmuda: Cross-modal unsupervised domain adaptation for 3d semantic segmentation. In CVPR, 2020. 2, 8
|
| 326 |
+
[19] Hu Jian, Hongya Tuo, Chao Wang, Lingfeng Qiao, Haowen Zhong, Yan Junchi, Zhongliang Jing, and Henry Leung. Discriminative partial domain adversarial network. In ECCV, 2020. 2
|
| 327 |
+
[20] Tarun Kalluri, Girish Varma, Manmohan Chandraker, and C.V. Jawahar. Universal semi-supervised semantic segmentation. In ICCV, 2019. 2
|
| 328 |
+
[21] John Lambert, Zhuang Liu, Ozan Sener, James Hays, and Vladlen Koltun. MSeg: A composite dataset for multidomain semantic segmentation. In CVPR, 2020. 2
|
| 329 |
+
[22] Yann LeCun, Corinna Cortes, and CJ Burges. Mnist handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2, 2010. 5
|
| 330 |
+
[23] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In ICML, 2015. 2
|
| 331 |
+
[24] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NeurIPS workshops, 2011. 5
|
| 332 |
+
[25] Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In ICCV, 2017. 2
|
| 333 |
+
[26] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In ICCV, 2019. 2, 5, 6
|
| 334 |
+
[27] Sayan Rakshit, Dipesh Tamboli, Pragati Shuddhodhan Meshram, Biplab Banerjee, Gemma Roig, and Subhasis Chaudhuri. Multi-source open-set deep adversarial domain adaptation. In ECCV, 2020. 2
|
| 335 |
+
[28] Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. Learning multiple visual domains with residual adapters. In NeurIPS, 2017. 2
|
| 336 |
+
[29] Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. Efficient parametrization of multi-domain deep neural networks. In CVPR, 2018. 2
|
| 337 |
+
[30] Stephan R. Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In ECCV, 2016. 6
|
| 338 |
+
[31] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015. 8
|
| 339 |
+
[32] German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, and Antonio Lopez. The SYNTHIA Dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In CVPR, 2016. 6
|
| 340 |
+
[33] Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, and Tatsuya Harada. Open set domain adaptation by backpropagation. In ECCV, 2018. 2
|
| 341 |
+
|
| 342 |
+
[34] Yi-Hsuan Tsai, Wei-Chih Hung, Samuel Schulter, Kihyuk Sohn, Ming-Hsuan Yang, and Manmohan Chandraker. Learning to adapt structured output space for semantic segmentation. In CVPR, 2018. 4, 7, 8
|
| 343 |
+
[35] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In CVPR, 2017. 2
|
| 344 |
+
[36] Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014. 2
|
| 345 |
+
[37] Tuan-Hung Vu, Himalaya Jain, Maxime Bucher, Mathieu Cord, and Patrick Pérez. Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation. In CVPR, 2019. 2, 7, 8
|
| 346 |
+
[38] Xudong Wang, Zhaowei Cai, Dashan Gao, and Nuno Vasconcelos. Towards universal object detection by domain attention. In CVPR, 2019. 2
|
| 347 |
+
[39] Ruijia Xu, Ziliang Chen, Wangmeng Zuo, Junjie Yan, and Liang Lin. Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In CVPR, 2018. 2, 6
|
| 348 |
+
[40] Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschlager, and Susanne Saminger-Platz. Central moment discrepancy (cmd) for domain-invariant representation learning. In ICLR, 2017. 2
|
| 349 |
+
[41] Jing Zhang, Zewei Ding, Wanqing Li, and Philip Ogunbona. Importance weighted adversarial nets for partial domain adaptation. In CVPR, 2018. 2
|
| 350 |
+
[42] Han Zhao, Shanghang Zhang, Guanhang Wu, José M. F. Moura, Joao P Costeira, and Geoffrey J Gordon. Adversarial multiple source domain adaptation. In NeurIPS, 2018. 2
|
| 351 |
+
[43] Sicheng Zhao, Bo Li, Xiangyu Yue, Yang Gu, Pengfei Xu, Runbo Hu, Hua Chai, and Kurt Keutzer. Multi-source domain adaptation for semantic segmentation. In NeurIPS, 2019. 2, 7
|
| 352 |
+
[44] Xiangyun Zhao, Samuel Schulter, Gaurav Sharma, Yi-Hsuan Tsai, Manmohan Chandraker, and Ying Wu. Object detection with a unified label space from multiple datasets. In ECCV, 2020. 2, 6
|
| 353 |
+
[45] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017. 7
|
mdalumultisourcedomainadaptationandlabelunificationwithpartialdatasets/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c1e56d5856a9a9b9edbaf3087bfc0aee13c62b8b47aec8d9cf4e4b0d886302a0
size 561448
mdalumultisourcedomainadaptationandlabelunificationwithpartialdatasets/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c441ce7c965ed54a69f68c4c82bfc393c5f9da4af2534474bccc2e78ee8232ba
size 509873
vonmisesfisherlossanexplorationofembeddinggeometriesforsupervisedlearning/2293c123-d314-493b-86a6-7904e80ced43_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:65ceeded5a11bb6a36d965719bd59ca29f00ac17bf28f85e19bc9542e5149da0
size 83445
vonmisesfisherlossanexplorationofembeddinggeometriesforsupervisedlearning/2293c123-d314-493b-86a6-7904e80ced43_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cb97fd78a73d9796baf2b39890da5d4f51c483ac753484a52cad8bfb69f7bf12
size 106103
vonmisesfisherlossanexplorationofembeddinggeometriesforsupervisedlearning/2293c123-d314-493b-86a6-7904e80ced43_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a2f7ba6a2db9d61455466a0843000c2a028a03cb3315d8300dc7991c6634813
size 1715672
vonmisesfisherlossanexplorationofembeddinggeometriesforsupervisedlearning/full.md
ADDED
|
@@ -0,0 +1,334 @@
# von Mises-Fisher Loss: An Exploration of Embedding Geometries for Supervised Learning
|
| 2 |
+
|
| 3 |
+
Tyler R. Scott*
|
| 4 |
+
University of Colorado, Boulder
|
| 5 |
+
tysc7237@colorado.edu
|
| 6 |
+
|
| 7 |
+
Andrew C. Gallagher

Google Research

agallagher@google.com
|
| 8 |
+
|
| 9 |
+
Michael C. Mozer

Google Research
|
| 10 |
+
mcmozer@gmail.com
|
| 11 |
+
|
| 12 |
+
# Abstract
|
| 13 |
+
|
| 14 |
+
Recent work has argued that classification losses utilizing softmax cross-entropy are superior not only for fixed-set classification tasks, but also for open-set tasks including few-shot learning and retrieval, outperforming losses developed specifically for those tasks. Softmax classifiers have been studied using different embedding geometries—Euclidean, hyperbolic, and spherical—and claims have been made about the superiority of one or another, but they have not been systematically compared with careful controls. We conduct an empirical investigation of embedding geometry on softmax losses for a variety of fixed-set classification and image retrieval tasks. An interesting property observed for the spherical losses leads us to propose a probabilistic classifier based on the von Mises-Fisher distribution, and we show that it is competitive with state-of-the-art methods while producing improved out-of-the-box calibration. We provide guidance regarding the trade-offs between losses and how to choose among them.
|
| 15 |
+
|
| 16 |
+
# 1. Introduction
|
| 17 |
+
|
| 18 |
+
Almost on a weekly basis, novel loss functions are proposed that claim superiority over standard losses for supervised learning in vision. At a coarse level, these loss functions can be divided into classification based and similarity based. Classification-based losses [5, 7, 10, 12, 14, 16, 36, 37, 38, 44, 45, 46, 58, 60, 61, 66, 67, 68, 77] have generally been applied to fixed-set classification tasks (i.e., tasks in which the set of classes in training and testing is identical). The prototypical classification-based loss uses a softmax function to map an embedding to a probability distribution over classes, which is then evaluated with cross-entropy [5]. Similarity-based losses [2, 6, 9, 15, 17, 20, 23, 24, 25, 26, 32, 34, 35, 40, 41, 47, 48, 51, 54, 55, 56, 57, 59, 62, 63, 64, 69, 70, 71, 72, 74, 76, 78] have been designed specif
|
| 19 |
+
|
| 20 |
+
ically for open-set tasks, which include retrieval and few-shot learning. Open-set tasks refer to situations in which the classes at testing are disjoint from, or sometimes a superset of, those available at training. The prototypical similarity-based method is the triplet loss which discovers embeddings such that an instance is closer to instances of the same class than to instances of different classes [51, 72].
|
| 21 |
+
|
| 22 |
+
Recent efforts to systematically compare losses support a provocative hypothesis: on open-set tasks, classification-based losses outperform similarity-based losses by leveraging embeddings in the layer immediately preceding the logits [4, 39, 58, 61, 77]. The apparent advantage of classifiers stems from the fact that similarity losses require sampling informative pairs, triplets, quadruplets, or batches of instances in order to train effectively [4, 14, 38, 66, 67, 68, 77]. However, all classification losses are not equal, and we find systematic differences among them with regard to a fundamental choice: the embedding geometry, which determines the similarity structure of the embedding space.
|
| 23 |
+
|
| 24 |
+
Classification losses span three embedding geometries: Euclidean, hyperbolic, and spherical. Although some comparisons have been made between geometries, the comparisons have not been entirely systematic and have not covered the variety of supervised tasks. We find this fact somewhat surprising given the many large-scale comparisons of loss functions. Furthermore, the comparisons that have been made appear to be contradictory. The face verification community has led the push for spherical losses, claiming superiority of spherical over Euclidean. However, this work is limited to open-set face-related tasks [14, 36, 45, 46, 66, 67, 68]. The deep metric-learning community has recently refocused its attention to classification losses, but it is unclear from empirical comparisons whether the best-performing geometry is Euclidean or spherical [4, 39, 44, 77]. Independently, Khrulkov et al. [25] show that a hyperbolic prototypical network is a strong performer on common few-shot learning benchmarks, and additionally a hyperbolic softmax classifier outperforms the Euclidean variant on person re-identification.
|
| 25 |
+
|
| 26 |
+
Unfortunately, these results are in contention with Tian et al. [61], where the authors claim that a simple Euclidean softmax classifier learns embeddings that are superior for few-shot learning.
|
| 27 |
+
|
| 28 |
+
One explanation for the discrepant claims is the presence of confounds that make it impossible to determine whether the causal factor for the superiority of one loss over another is embedding geometry or some other ancillary aspect of the loss. Another explanation is that each piece of research examines only a subset of losses or a subset of datasets. Also, as pointed out in Musgrave et al. [39], experimental setups (e.g., using the test set as a validation signal, insufficient hyperparameter tuning, varying forms of data augmentation) make it difficult to trust and reproduce published results. The goal of our work is to take a step toward rigor by reconciling differences among classification losses on both fixed-set and image-retrieval benchmarks.
|
| 29 |
+
|
| 30 |
+
As discussed in more detail in Section 2.3, our investigations led us to uncover an interesting property of spherical losses, which in turn suggested a probabilistic spherical classifier based on the von Mises-Fisher distribution. While our loss is competitive with state-of-the-art alternatives and produces improved out-of-the-box calibration, we avoid unequivocal claims about its superiority. We do, however, believe that it improves on previously proposed stochastic classifiers (e.g., [41, 52]), for example in its ability to scale to higher-dimensional embedding spaces.
|
| 31 |
+
|
| 32 |
+
Contributions. In our work, (1) we characterize classification losses in terms of embedding geometry, (2) we systematically compare classification losses in a well-controlled setting on a range of fixed- and open-set tasks, examining both accuracy and calibration, (3) we reach the surprising conclusion that spherical losses generally outperform the standard softmax cross-entropy loss that is used almost exclusively in practice, (4) we propose a stochastic spherical loss based on von Mises-Fisher distributions, scale it to larger tasks and representational spaces than previous stochastic losses, and show that it can obtain state-of-the-art performance with significantly lower calibration error, and (5) we discuss trade-offs between losses and factors to consider when choosing among them.
|
| 33 |
+
|
| 34 |
+
# 2. Classification Losses
|
| 35 |
+
|
| 36 |
+
We consider classification losses that compute the cross-entropy between a predicted class distribution and a one-hot target distribution (or equivalently, as the negative log-likelihood under the model of the target class). The geometry determines the specific mapping from a deep embedding to a class posterior, and in a classification loss, this mapping is determined by a set of parameters learned via gradient descent. We summarize the three embedding geometries that serve to differentiate classification losses.
|
| 37 |
+
|
| 38 |
+
# 2.1. Euclidean
|
| 39 |
+
|
| 40 |
+
Euclidean embeddings lie in an $n$-dimensional real-valued space (i.e., $\mathbb{R}^n$ or sometimes $\mathbb{R}_+^n$). The commonly-used dot-product softmax [5], which we refer to as STANDARD, has the form:
|
| 41 |
+
|
| 42 |
+
$$
p(y \mid \boldsymbol{z}) = \frac{\exp\left(\boldsymbol{w}_{y}^{\mathrm{T}} \boldsymbol{z}\right)}{\sum_{j} \exp\left(\boldsymbol{w}_{j}^{\mathrm{T}} \boldsymbol{z}\right)}, \tag{1}
$$
|
| 45 |
+
|
| 46 |
+
where $\mathbf{z}$ is an embedding and $\mathbf{w}_j$ are weights for class $j$ . The dot product is a measure of similarity in Euclidean space, and is related to the Euclidean distance by $||\mathbf{w}_j - \mathbf{z}||^2 = ||\mathbf{w}_j||^2 + ||\mathbf{z}||^2 - 2\mathbf{w}_j^{\mathrm{T}}\mathbf{z}$ . (Classifiers using Euclidean distance have been explored, but gradient-based training methods suffer from the curse of dimensionality because gradients go to zero when all points are far from one another. Prototypical networks [54] do succeed using a Euclidean distance posterior, but the weights are determined by averaging instance embeddings, not gradient descent.)
|
| 47 |
+
|
| 48 |
+
# 2.2. Hyperbolic
|
| 49 |
+
|
| 50 |
+
We follow Ganea et al. [16] and Khrulkov et al. [25] and consider the Poincaré ball model of hyperbolic geometry defined as $\mathbb{D}_c^n = \{\pmb {z}\in \mathbb{R}^n:c\| \pmb {z}\| ^2 < 1,c\geq 0\}$ where $c$ is a hyperparameter controlling the curvature of the ball. Embeddings thus lie inside a hypersphere of radius $1 / \sqrt{c}$ . To perform multi-class classification, we employ the hyperbolic softmax generalization derived in [16], hereafter HYPERBOLIC:
|
| 51 |
+
|
| 52 |
+
$$
p(y \mid \boldsymbol{z}) \propto \exp\left(\frac{\lambda_{\boldsymbol{p}_{y}}^{c}\,\|\boldsymbol{a}_{y}\|}{\sqrt{c}} \sinh^{-1}\left(\frac{2\sqrt{c}\,\langle -\boldsymbol{p}_{y} \oplus_{c} \boldsymbol{z},\, \boldsymbol{a}_{y}\rangle}{\left(1 - c\,\|{-\boldsymbol{p}_{y} \oplus_{c} \boldsymbol{z}}\|^{2}\right)\|\boldsymbol{a}_{y}\|}\right)\right), \tag{2}
$$
|
| 55 |
+
|
| 56 |
+
where $\pmb{p}_j\in \mathbb{D}_c^n$ and $\pmb{a}_{j}\in T_{\pmb{p}_{j}}\mathbb{D}_{c}^{n}\setminus \{\mathbf{0}\}$ are learnable parameters for class $j$, $\lambda_{\pmb{p}_j}^c$ is the conformal factor of $\pmb{p}_j$, $\langle \cdot ,\cdot \rangle$ is the dot product, and $\oplus_{c}$ is the Möbius addition operator. Further details can be found in [16, 25].
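For illustration, a PyTorch sketch of the hyperbolic logit in Eq. 2, using the standard Poincaré-ball operations (Möbius addition and the conformal factor $\lambda_p^c = 2/(1-c\|p\|^2)$) as defined in [16]; this is a direct reading of the formula, not the authors' implementation:

```python
import torch

def mobius_add(x, y, c):
    """Mobius addition on the Poincare ball of curvature c (standard form from [16])."""
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + (c ** 2) * x2 * y2
    return num / den.clamp_min(1e-15)

def hyperbolic_logit(z, p_y, a_y, c):
    """Unnormalized log-probability of one class under the hyperbolic softmax (Eq. 2).

    z: (B, n) embeddings inside the ball; p_y, a_y: (n,) class parameters; c: curvature.
    """
    lam = 2.0 / (1.0 - c * (p_y * p_y).sum(-1))      # conformal factor of p_y
    diff = mobius_add(-p_y, z, c)                    # -p_y (+)_c z, shape (B, n)
    a_norm = a_y.norm(dim=-1)
    inner = (diff * a_y).sum(-1)
    arg = 2 * (c ** 0.5) * inner / ((1 - c * (diff * diff).sum(-1)) * a_norm)
    return lam * a_norm / (c ** 0.5) * torch.asinh(arg)
```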
|
| 57 |
+
|
| 58 |
+
# 2.3. Spherical
|
| 59 |
+
|
| 60 |
+
Spherical embeddings lie on the surface of an $n$ -dimensional unit-hypersphere (i.e., $\mathbb{S}^{n-1}$ ). The traditional loss, hereafter COSINE, uses cosine similarity [77]:
|
| 61 |
+
|
| 62 |
+
$$
p(y \mid \boldsymbol{z}) = \frac{\exp(\beta \cos \theta_{y})}{\sum_{j} \exp(\beta \cos \theta_{j})}, \tag{3}
$$
|
| 65 |
+
|
| 66 |
+
where $\beta > 0$ is an inverse-temperature parameter, $\|z\| = 1$ , $\|\pmb{w}_j\| = 1 \forall j$ , and $\theta_j$ is the angle between $z$ and $\pmb{w}_j$ . Note
|
| 67 |
+
|
| 68 |
+
that, in contrast to STANDARD, the $\ell_2$ -norms are factored out of the weight vectors and embeddings, thus only the direction determines class association.
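For concreteness, a minimal PyTorch sketch of the COSINE objective in Eq. 3; variable names are illustrative:

```python
import torch
import torch.nn.functional as F

def cosine_softmax_loss(z, W, y, beta):
    """Cross-entropy over cosine-similarity logits (Eq. 3).

    z:    (B, n) embeddings from the network
    W:    (Y, n) class weight vectors
    y:    (B,) integer class labels
    beta: inverse temperature (> 0)
    """
    z_hat = F.normalize(z, dim=1)       # project embeddings onto the unit hypersphere
    W_hat = F.normalize(W, dim=1)       # norms are factored out of the weights too
    logits = beta * z_hat @ W_hat.t()   # cos(theta_j), scaled by beta
    return F.cross_entropy(logits, y)
```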
|
| 69 |
+
|
| 70 |
+
Many variants of COSINE have been proposed, particularly in the face verification community [14, 36, 45, 46, 66, 67, 68], some of which are claimed to be superior. For completeness, we also experiment with ArcFace [14], one of the top-performing variants, hereafter ARCFACE:
|
| 71 |
+
|
| 72 |
+
$$
p(y \mid \boldsymbol{z}) = \frac{\exp(\beta \cos(\theta_{y} + m))}{\exp(\beta \cos(\theta_{y} + m)) + \sum_{j \neq y} \exp(\beta \cos \theta_{j})}, \tag{4}
$$
|
| 75 |
+
|
| 76 |
+
where $m \geq 0$ is an additive-angular-margin hyperparameter penalizing the true class. (Note that we are coloring the loss name by geometry; COSINE and ARCFACE are both spherical losses.)
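A corresponding sketch of the ARCFACE variant in Eq. 4, applying the additive angular margin only to the true-class logit; again an illustrative reading of the equation rather than the reference implementation of [14]:

```python
import torch
import torch.nn.functional as F

def arcface_loss(z, W, y, beta, m):
    """ArcFace-style additive angular margin on the true-class logit (Eq. 4)."""
    cos = F.normalize(z, dim=1) @ F.normalize(W, dim=1).t()        # (B, Y) cosine logits
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))             # angles theta_j
    target = F.one_hot(y, num_classes=W.shape[0]).bool()
    cos_margin = torch.where(target, torch.cos(theta + m), cos)    # penalize the true class
    return F.cross_entropy(beta * cos_margin, y)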
|
| 77 |
+
|
| 78 |
+
Early in our investigations, we noticed an interesting property of spherical losses: $\|z\|$ encodes information about uncertainty or ambiguity. For example, the left and right frames of Figure 1 show MNIST [33] test images that, when trained with COSINE, produce embeddings that have small and large $\ell_2$ -norms, respectively. This result is perfectly intuitive for STANDARD since the norm affects the confidence or peakedness of the class posterior distribution (verified in [42, 46]), but for COSINE, the norm has absolutely no effect on the posterior. Because the norm is factored out by the cosine similarity, there is no force on the model during training to reflect the ambiguity of an instance in the norm. Despite ignoring it, the COSINE model better discriminates correct versus incorrect predictions with the norm than does the STANDARD model (see COSINE and STANDARD rows of Table 1; note that the row for VMF corresponds to a loss we introduce in the next section).
|
| 79 |
+
|
| 80 |
+
Why does the COSINE embedding convey a confidence signal in the norm? One intuition is that when an instance is ambiguous, it could be assigned many different labels in the training set, each pulling the instance's embedding in different directions. If these directions roughly cancel, the embedding will be pulled to the origin.
|
| 81 |
+
|
| 82 |
+
Because COSINE has claimed advantages over STANDARD, yet discards important information conveyed by $\| z\|$, we sought to develop a variant of COSINE that uses the $\ell_2$-norm to explicitly represent uncertainty in the embedding space, and thus to inform the classification decision. We refer to this variant as the von Mises-Fisher loss or vMF.
|
| 83 |
+
|
| 84 |
+
# 2.3.1 von Mises-Fisher Loss
|
| 85 |
+
|
| 86 |
+
The von Mises-Fisher (vMF) distribution is the maximum-entropy distribution on the surface of a hypersphere, parameterized by a mean unit vector, $\mu$ , and isotropic concentra
|
| 87 |
+
|
| 88 |
+

|
| 89 |
+
Figure 1. MNIST test images corresponding to embeddings with the (left) smallest $\|z\|$ and (right) largest $\|z\|$; trained with COSINE. The left grid clearly contains "noisier" or unorthodox digits.
|
| 90 |
+
|
| 91 |
+

|
| 92 |
+
|
| 93 |
+
<table><tr><td></td><td>MNIST</td><td>Fashion MNIST</td><td>CIFAR10</td><td>CIFAR100</td></tr><tr><td>STANDARD</td><td>0.92</td><td>0.84</td><td>0.84</td><td>0.66</td></tr><tr><td>HYPERBOLIC</td><td>0.91</td><td>0.81</td><td>0.87</td><td>0.70</td></tr><tr><td>COSINE</td><td>0.93</td><td>0.84</td><td>0.90</td><td>0.84</td></tr><tr><td>ARCFACE</td><td>0.95</td><td>0.89</td><td>0.90</td><td>0.80</td></tr><tr><td>VMF</td><td>0.97</td><td>0.88</td><td>0.82</td><td>0.80</td></tr></table>
|
| 94 |
+
|
| 95 |
+
Table 1. The mean AUROC indicating how well the norm of an embedding, $\| \boldsymbol{z} \|$ , discriminates correct and incorrect classifier outputs for five losses (rows) and four data sets (columns). Chance is 0.5; perfect is 1.0. Boldface indicates the highest value. Error bars are negligible across five replications. Although the embedding norm signals classifier accuracy for all losses, spherical losses yield the strongest signal.
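The AUROC in Table 1 can be reproduced in a few lines; a sketch assuming test embeddings, labels, and predictions are already available as NumPy arrays:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def norm_auroc(z, y_true, y_pred):
    """AUROC for how well ||z|| separates correct from incorrect predictions (Table 1).

    z:      (N, n) test embeddings
    y_true: (N,) ground-truth labels
    y_pred: (N,) predicted labels
    """
    correct = (y_true == y_pred).astype(int)   # 1 = correct prediction
    norms = np.linalg.norm(z, axis=1)          # larger norm should indicate a confident instance
    return roc_auc_score(correct, norms)
```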
|
| 96 |
+
|
| 97 |
+
tion, $\kappa$ . The pdf for an $n$ -dimensional unit vector $\pmb{x}$ is:
|
| 98 |
+
|
| 99 |
+
$$
p(\boldsymbol{x}; \boldsymbol{\mu}, \kappa) = C_{n}(\kappa)\, \exp(\kappa \boldsymbol{\mu}^{\mathrm{T}} \boldsymbol{x}) \quad \text{with}
$$

$$
C_{n}(\kappa) = \frac{\kappa^{n/2 - 1}}{(2\pi)^{n/2}\, I_{n/2 - 1}(\kappa)}, \tag{5}
$$
|
| 106 |
+
|
| 107 |
+
where $\pmb{x},\pmb{\mu}\in \mathbb{S}^{n - 1}$, $\kappa \geq 0$, and $I_{v}$ denotes the modified Bessel function of the first kind at order $v$.
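For reference, the log-density implied by Eq. 5 can be evaluated stably with the exponentially scaled Bessel function, since $\log I_v(\kappa) = \log(\mathrm{ive}(v,\kappa)) + \kappa$; a sketch assuming a single unit vector $x$ and $\kappa > 0$:

```python
import numpy as np
from scipy.special import ive

def log_vmf_pdf(x, mu, kappa):
    """Log-density of vMF(mu, kappa) at unit vector x (Eq. 5); requires kappa > 0."""
    n = mu.shape[-1]
    v = n / 2.0 - 1.0
    log_bessel = np.log(ive(v, kappa)) + kappa          # log I_{n/2-1}(kappa), overflow-safe
    log_C = v * np.log(kappa) - (n / 2.0) * np.log(2 * np.pi) - log_bessel
    return log_C + kappa * np.dot(mu, x)
```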
|
| 108 |
+
|
| 109 |
+
The von Mises-Fisher loss, hereafter vMF, uses the same form of the posterior as COSINE (Equation 3), although $z$ and $\{w_j\}$ are now vMF random variables, defined in terms of the deterministic output of the network, $\tilde{z}$ , and the learnable weight vector for each class $j$ , $\tilde{w}_j$ :
|
| 110 |
+
|
| 111 |
+
$$
\boldsymbol{z} \sim \mathrm{vMF}\left(\boldsymbol{\mu} = \frac{\tilde{\boldsymbol{z}}}{\|\tilde{\boldsymbol{z}}\|},\ \kappa = \|\tilde{\boldsymbol{z}}\|\right), \tag{6}
$$

$$
\boldsymbol{w}_{j} \sim \mathrm{vMF}\left(\boldsymbol{\mu} = \frac{\tilde{\boldsymbol{w}}_{j}}{\|\tilde{\boldsymbol{w}}_{j}\|},\ \kappa = \|\tilde{\boldsymbol{w}}_{j}\|\right).
$$
|
| 118 |
+
|
| 119 |
+
The norm $\|\cdot\|$ directly controls the spread of the distribution with a zero norm yielding a uniform distribution over the hypersphere's surface. The loss remains the negative log-likelihood under the target class, but in contrast to COSINE, it is necessary to marginalize over the embedding and
|
| 120 |
+
|
| 121 |
+
weight-vector uncertainty:
|
| 122 |
+
|
| 123 |
+
$$
\begin{aligned} \mathcal{L}(y, \boldsymbol{z}; \boldsymbol{w}_{1:Y}) &= \mathbb{E}_{\boldsymbol{z}, \boldsymbol{w}_{1:Y}}\left[ -\log p(y \mid \boldsymbol{z}, \boldsymbol{w}_{1:Y}) \right] \\ &= \mathbb{E}_{\boldsymbol{z}, \boldsymbol{w}_{1:Y}}\left[ -\log \frac{\exp(\beta \cos \theta_{y})}{\sum_{j} \exp(\beta \cos \theta_{j})} \right], \end{aligned} \tag{7}
$$
|
| 126 |
+
|
| 127 |
+
where $Y$ is the total number of classes in the training set. Applying Jensen's inequality, we obtain an upper-bound on $\mathcal{L}$ which allows us to marginalize over the $\{w_{j}\}$ and obtain a form expressed in terms of an expectation over $z$ :
|
| 128 |
+
|
| 129 |
+
$$
\begin{aligned} \mathcal{L}(y, \boldsymbol{z}; \boldsymbol{w}_{1:Y}) \leq\ & \mathbb{E}_{\boldsymbol{z}}\left[ \log\left( \sum_{j} \exp\left( \log C_{n}(\|\tilde{\boldsymbol{w}}_{j}\|) - \log C_{n}(\|\tilde{\boldsymbol{w}}_{j} + \beta \boldsymbol{z}\|) \right) \right) \right] \\ & - \beta\, \mathbb{E}[\boldsymbol{w}_{y}]^{\mathrm{T}}\, \mathbb{E}[\boldsymbol{z}], \end{aligned} \tag{8}
$$
|
| 132 |
+
|
| 133 |
+
where $\mathbb{E}[z] = (I_{n / 2}(\kappa) / I_{n / 2 - 1}(\kappa))\pmb{\mu}_{\pmb{z}}$. This objective can be approximated by sampling only from $\pmb{z}$, and we find that, during both training and testing, 10 samples are sufficient. At test time, vMF approximates $\mathbb{E}_{\pmb{z},\pmb{w}_{1:Y}}[p(y|\pmb{z},\pmb{w}_{1:Y})]$ using Monte Carlo samples from each of $\pmb{z}$ and $\{\pmb{w}_j\}$.
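A sketch of this test-time Monte Carlo estimate, assuming a hypothetical `sample_vmf(mu, kappa, s)` helper (e.g., the reparameterized rejection sampler of [11]) is available; everything else follows the posterior form of Eq. 3:

```python
import torch
import torch.nn.functional as F

def mc_posterior(z_tilde, W_tilde, beta, sample_vmf, num_samples=10):
    """Monte Carlo estimate of E[p(y | z, w_1:Y)] at test time.

    z_tilde:    (B, n) deterministic network outputs
    W_tilde:    (Y, n) unnormalized class weight vectors
    sample_vmf: hypothetical helper sample_vmf(mu, kappa, s) -> (s, ..., n), drawing
                s unit-norm samples from vMF(mu, kappa)
    """
    mu_z, kappa_z = F.normalize(z_tilde, dim=-1), z_tilde.norm(dim=-1)
    mu_w, kappa_w = F.normalize(W_tilde, dim=-1), W_tilde.norm(dim=-1)
    probs = 0.0
    for _ in range(num_samples):
        z = sample_vmf(mu_z, kappa_z, 1)[0]                 # (B, n)
        w = sample_vmf(mu_w, kappa_w, 1)[0]                 # (Y, n)
        probs = probs + torch.softmax(beta * z @ w.t(), dim=1)
    return probs / num_samples
```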
|
| 134 |
+
|
| 135 |
+
To sample, we make use of a rejection-sampling reparameterization trick [11]. However, [11] computes modified Bessel functions on the CPU with manually-defined gradients for backpropagation, substantially slowing both the forward and backwards passes through the network. Instead, we borrow tight bounds for $I_{n/2}(\kappa) / I_{n/2-1}(\kappa)$ from [50] and $\log C_n(\kappa)$ from [31], which together make Equation 8 efficient and tractable to compute. We find that the rejection sampler is stable and efficient in embedding spaces up to at least 512D, adding little overhead to the training and testing procedures (see Appendix G). A full derivation of the loss is provided in Appendix A. Additional details regarding the bounds for $I_{n/2}(\kappa) / I_{n/2-1}(\kappa)$ and $\log C_n(\kappa)$ can be found in Appendix B.
|
| 136 |
+
|
| 137 |
+
The network fails to train when the initial $\{\tilde{w}_j\}$ are chosen using standard initializers, particularly in higher dimensional embedding spaces. We discovered the failure to be due to near-zero gradients for the ratio of modified Bessel functions when the vector norms are small (see flat slope in Figure 2 for small $\kappa$ ). We derived a dimension-equivariant initialization scheme (see Appendix C) that produces norms that yield a strong enough gradient for training. We also include a fixed scale-factor on the embedding, $\tilde{z}$ , for the same reason. The points in Figure 2 show the scaling of initial parameters our scheme produces for various embedding dimensionalities, designed to ensure the ratio of modified Bessel functions has a constant value of 0.4. The expected norms are plotted as individual points with matching color
|
| 138 |
+
|
| 139 |
+

|
| 140 |
+
Figure 2. The ratio of modified Bessel functions versus $\kappa$ for various embedding dimensionalities (colored curves). Our initializer for $\kappa$ ensures the ratio of modified Bessel functions is constant regardless of the dimensionality. The value of $\kappa$ provided by the initializer for each dimensionality is plotted as a single point. A perfect initializer would ensure the point sits exactly on the matching-colored curve. For this simulation, we initialized such that the y-axis had a constant value of 0.4.
|
| 141 |
+
|
| 142 |
+

|
| 143 |
+
Figure 3. Cars196 test images corresponding to vMF embeddings with the (left) smallest $\kappa_z$ and (right) largest $\kappa_z$ . Instances that are more difficult to classify or ambiguous correspond to small $\kappa_z$ .
|
| 144 |
+
|
| 145 |
+

|
| 146 |
+
|
| 147 |
+
and we find they produce a near-perfect fit for dimensionalities greater than 8D, lying on top of their corresponding curves.
|
| 148 |
+
|
| 149 |
+
To demonstrate that vMF learns explicit uncertainty structure, we train it on Cars196 [28], a dataset where the class is determined by the make, model, and year of a photographed car. In Figure 3, we present images corresponding to embeddings whose distributions have the most uncertainty (smallest $\kappa_z$ ) and least uncertainty (largest $\kappa_z$ ) in the test set. vMF behaves quite sensibly: the most uncertain embeddings correspond to images of cars that are far from the camera or at poses where it's difficult to extract the make, model, and year; and the most certain embeddings correspond to images of cars close to the camera in a neutral pose.
|
| 150 |
+
|
| 151 |
+

|
| 152 |
+
Figure 4. 3D embeddings of the MNIST test set for each of the five classification variants. Instances are colored by their ground-truth class. The plotted instances for VMF correspond to $\mu_{z}$ . Note that HYPERBOLIC, COSINE, ARCFACE, and VMF are showing embeddings prior to the normalization/projection step. Best viewed in color.
|
| 153 |
+
|
| 154 |
+

|
| 155 |
+
|
| 156 |
+

|
| 157 |
+
|
| 158 |
+

|
| 159 |
+
|
| 160 |
+

|
| 161 |
+
|
| 162 |
+
<table><tr><td></td><td>MNIST</td><td>Fashion MNIST</td><td>CIFAR10</td><td>CIFAR100</td></tr><tr><td>STANDARD</td><td>98.92 ± 0.03</td><td>90.31 ± 0.12</td><td>94.13 ± 0.05</td><td>69.21 ± 0.18</td></tr><tr><td>HYPERBOLIC</td><td>98.92 ± 0.03</td><td>90.31 ± 0.13</td><td>94.11 ± 0.05</td><td>69.85 ± 0.07</td></tr><tr><td>COSINE</td><td>98.99 ± 0.03</td><td>90.39 ± 0.09</td><td>93.99 ± 0.10</td><td>70.57 ± 0.54</td></tr><tr><td>ARCFACE</td><td>99.13 ± 0.02</td><td>90.73 ± 0.12</td><td>94.15 ± 0.05</td><td>69.08 ± 0.57</td></tr><tr><td>VMF</td><td>99.02 ± 0.04</td><td>90.82 ± 0.14</td><td>94.00 ± 0.12</td><td>69.94 ± 0.18</td></tr></table>
|
| 164 |
+
|
| 165 |
+
Table 2. Mean classification accuracy $(\%)$ of each loss across four fixed-set classification tasks. Error bars represent $\pm 1$ standard-error of the mean. Boldface indicates the best-performing loss(es). Note that on average, the three spherical losses outperform HYPERBOLIC and STANDARD.
|
| 166 |
+
|
| 167 |
+
# 3. Experimental Results
|
| 168 |
+
|
| 169 |
+
We experiment with four fixed-set classification datasets—MNIST [33], FashionMNIST [75], CIFAR10 [29], and CIFAR100 [29]—as well as three common datasets for image retrieval—Cars196 [28], CUB200-2011 [65], and Stanford Online Products (SOP) [57]. MNIST and FashionMNIST are trained with 3D embeddings, CIFAR10 and CIFAR100 with 128D embeddings, and all open-set datasets with 512D embeddings. We perform a hyperparameter search for all losses on each dataset. The hyperparameters associated with the best performance on the validation set are then used to train five replications of the method. Reported test performance represents the average over the five replications. Additional details including the network architecture, dataset details, and hyperparameters are included in Appendix D.
|
| 170 |
+
|
| 171 |
+
# 3.1. Fixed-Set Classification
|
| 172 |
+
|
| 173 |
+
We begin by comparing representations learned by the five losses we described: STANDARD, HYPERBOLIC, COSINE, ARCFACE, and VMF. The latter three have spherical geometries. ARCFACE is a minor variant of COSINE claimed to be a top performer for face verification [14]. VMF is our probabilistic extension of COSINE. Using a 3D embedding on MNIST, we observe decreasing intra-class angular variance for the losses appearing from left to right in Figure 4. The intra-class variance is related to inter-class discriminability, as the 10 classes are similarly dispersed for all losses. The three losses with spherical geometry obtain the lowest variance, with ARCFACE lower than COSINE due
|
| 174 |
+
|
| 175 |
+
to a margin hyperparameter designed to penalize intra-class variance; and vMF achieves the same, if not lower variance still, as a natural consequence of uncertainty reduction.
|
| 176 |
+
|
| 177 |
+
The test accuracy for each of the fixed-set classification datasets and the five losses is presented in Table 2. Across all datasets, spherical losses outperform STANDARD and HYPERBOLIC. Among the spherical losses, ARCFACE and COSINE are deficient on at least one dataset, whereas VMF is a consistently strong performer.
|
| 178 |
+
|
| 179 |
+
Table 3 presents the top-label expected calibration error (ECE) for each dataset and loss. The top-label expected calibration error approximates the disparity between a model's confidence output, $\max_y p(y|z)$ , and the ground-truth likelihood of being correct [30, 49]. The left four columns show out-of-the-box ECE on the test set (i.e., prior to any post-hoc calibration). vMF has significantly reduced ECE compared to other losses, with relative error reductions of $40 - 70\%$ for FashionMNIST, CIFAR10, and CIFAR100. The right four columns show ECE after applying post-hoc temperature scaling [19]. STANDARD and HYPERBOLIC greatly benefit from temperature scaling, with STANDARD exhibiting the lowest calibration error.
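For reference, a minimal sketch of top-label ECE with equal-mass bins, as used for Table 3; inputs are assumed to be NumPy arrays of per-instance confidences and correctness indicators:

```python
import numpy as np

def ece_equal_mass(confidences, correct, num_bins=15):
    """Top-label expected calibration error with equal-mass bins.

    confidences: (N,) max_y p(y|z) for each test instance
    correct:     (N,) 1 if the argmax prediction was correct, else 0
    """
    order = np.argsort(confidences)
    conf, corr = confidences[order], correct[order]
    bins = np.array_split(np.arange(len(conf)), num_bins)   # equal-mass bins
    ece = 0.0
    for b in bins:
        if len(b) == 0:
            continue
        ece += (len(b) / len(conf)) * abs(corr[b].mean() - conf[b].mean())
    return ece
```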
|
| 180 |
+
|
| 181 |
+
Post-hoc calibration requires a validation set, but many settings cannot afford the data budget to reserve a sizeable validation set, which makes out-of-the-box calibration a desirable property. For example, in few-shot and transfer learning, one may not have enough data in the target domain to both fine tune the classifier and validate.
|
| 182 |
+
|
| 183 |
+
Temperature scaling is not as effective when applied to spherical losses as when applied to STANDARD and HYPERBOLIC. The explanation, we hypothesize, is that the spher
|
| 184 |
+
|
| 185 |
+
<table><tr><td rowspan="2"></td><td colspan="4">ECE</td><td colspan="4">ECE with Temperature Scaling</td></tr><tr><td>MNIST</td><td>Fashion MNIST</td><td>CIFAR10</td><td>CIFAR100</td><td>MNIST</td><td>Fashion MNIST</td><td>CIFAR10</td><td>CIFAR100</td></tr><tr><td>STANDARD</td><td>2.4 ± 0.2</td><td>12.4 ± 0.8</td><td>8.8 ± 0.1</td><td>20.6 ± 0.2</td><td>0.4 ± 0.1</td><td>5.5 ± 1.3</td><td>2.7 ± 0.2</td><td>2.2 ± 0.1</td></tr><tr><td>HYPERBOLIC</td><td>2.8 ± 0.1</td><td>13.2 ± 0.4</td><td>8.9 ± 0.1</td><td>22.0 ± 0.2</td><td>1.4 ± 0.1</td><td>7.2 ± 0.7</td><td>2.8 ± 0.1</td><td>2.6 ± 0.1</td></tr><tr><td>COSINE</td><td>1.8 ± 0.2</td><td>7.9 ± 0.5</td><td>8.9 ± 0.2</td><td>21.8 ± 1.2</td><td>1.6 ± 0.1</td><td>4.2 ± 0.1</td><td>6.4 ± 0.2</td><td>10.8 ± 0.8</td></tr><tr><td>ARCFACE</td><td>2.3 ± 0.1</td><td>11.0 ± 0.5</td><td>10.0 ± 0.1</td><td>26.4 ± 0.4</td><td>1.7 ± 0.1</td><td>7.2 ± 0.4</td><td>8.7 ± 0.1</td><td>15.7 ± 0.2</td></tr><tr><td>VMF</td><td>1.6 ± 0.1</td><td>4.2 ± 0.5</td><td>5.9 ± 0.2</td><td>7.9 ± 0.3</td><td>1.5 ± 0.1</td><td>5.0 ± 0.2</td><td>5.3 ± 0.2</td><td>8.0 ± 0.2</td></tr></table>
|
| 186 |
+
|
| 187 |
+
Table 3. Mean expected calibration error (\%), computed with 15 equal-mass bins, before post-hoc calibration (leftmost four columns) and after temperature scaling (rightmost four columns) across the four fixed-set classification tasks. Error bars represent $\pm 1$ standard-error of the mean. Boldface indicates the loss(es) with the lowest error.
|
| 188 |
+
|
| 189 |
+
<table><tr><td></td><td>Cars196</td><td>CUB200-2011</td><td>SOP</td></tr><tr><td>STANDARD</td><td>21.3 ± 0.2</td><td>20.0 ± 0.2</td><td>39.7 ± 0.1</td></tr><tr><td>+ COSINE AT TEST</td><td>23.2 ± 0.1</td><td>21.4 ± 0.2</td><td>42.1 ± 0.1</td></tr><tr><td>HYPERBOLIC</td><td>22.9 ± 0.3</td><td>20.1 ± 0.3</td><td>41.0 ± 0.2</td></tr><tr><td>+ COSINE AT TEST</td><td>25.0 ± 0.4</td><td>21.8 ± 0.2</td><td>44.0 ± 0.1</td></tr><tr><td>COSINE</td><td>24.6 ± 0.4</td><td>22.8 ± 0.1</td><td>44.3 ± 0.1</td></tr><tr><td>ARCFACE</td><td>27.4 ± 0.2</td><td>23.1 ± 0.3</td><td>40.8 ± 0.3</td></tr><tr><td>VMF</td><td>27.2 ± 0.1</td><td>22.1 ± 0.1</td><td>38.3 ± 0.2</td></tr></table>
|
| 190 |
+
|
| 191 |
+
Table 4. Mean mAP@R (%) across the three open-set image retrieval tasks. Error bars represent $\pm 1$ standard-error of the mean. Boldface indicates the best-performing loss(es). “+ Cosine at Test” replaces the default metric for the geometry (i.e., Euclidean for STANDARD and Poincaré for HYPERBOLIC) with cosine distance to compare instances.
|
| 192 |
+
|
| 193 |
+
ical losses incorporate a learned temperature parameter $\beta$ (which we discuss below), which is unraveled by the calibration temperature. We leave it as an open question for how to properly post-hoc calibrate spherical losses.
|
| 194 |
+
|
| 195 |
+
# 3.2. Open-Set Retrieval
|
| 196 |
+
|
| 197 |
+
For open-set retrieval, we follow the data preprocessing pipeline of [4] where each dataset is first split into a train set and test set with disjoint classes. We additionally split off $15\%$ of the training classes for a validation set, a decision that has been left out of many training procedures of similarity-based losses [39]. We evaluate methods using mean-average-precision at R (mAP@R), a metric shown to be more informative than Recall@1 [39]. For the stochastic loss, vMF, we compute $\mathbb{E}_{z_{1:N}}[\mathrm{mAP}@\mathrm{R}(z_{1:N},y_{1:N})]$ where $N$ is the number of test instances.
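A sketch of one common mAP@R formulation following [39], assuming unit-normalized NumPy embeddings so that the dot product equals cosine similarity; this is illustrative rather than the exact evaluation code used here:

```python
import numpy as np

def map_at_r(embeddings, labels):
    """mAP@R over a test set: for each query, R is the number of other instances with
    the same label; precision at i counts only when the i-th retrieval is relevant."""
    sims = embeddings @ embeddings.T                    # cosine similarity for unit-norm rows
    np.fill_diagonal(sims, -np.inf)                     # exclude the query itself
    aps = []
    for i, y in enumerate(labels):
        r = int((labels == y).sum()) - 1                # R for this query
        if r == 0:
            continue
        top = np.argsort(-sims[i])[:r]                  # R nearest neighbors
        rel = (labels[top] == y).astype(float)
        prec_at_i = np.cumsum(rel) / np.arange(1, r + 1)
        aps.append((prec_at_i * rel).sum() / r)
    return float(np.mean(aps))
```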
|
| 198 |
+
|
| 199 |
+
Table 4 presents the retrieval performance for each loss. As with fixed-set classification, there is no consistent winner across all datasets, but spherical losses tend to outperform STANDARD and HYPERBOLIC. Boudiaf et al. [4] find that retrieval performance can be improved for STANDARD by employing cosine distance at test time. Although no principled explanation is provided, we note from Figure 4 that $\| z\|$ introduces a large source of intra-class variance in STANDARD and HYPERBOLIC. This variance is factored out automatically by the spherical losses. As shown in Table 4, cosine distance at test improves both STANDARD and HYPERBOLIC, though not to the level of the best performing spherical loss.
|
| 200 |
+
|
| 201 |
+
In contrast to other stochastic losses [41, 52], VMF scales to high-dimensional embeddings (512D in this case) and can be competitive with the state-of-the-art. However, it has the worst performance on SOP, which has 9,620 training classes, many more than the other datasets. In Section 2.3.1, we mentioned that to marginalize out the weight distributions, we successively apply Jensen's inequality to their expectations. This decision resulted in a tractable loss, but it is an upper bound on the true loss, and the bound very likely becomes loose as the number of training classes increases. We hypothesize that the inferior performance is due to this design choice; despite experimentation with curriculum learning techniques that condition on a subset of classes during training, results did not improve.
|
| 202 |
+
|
| 203 |
+
# 3.3. Role of Temperature
|
| 204 |
+
|
| 205 |
+
Because spherical losses use cosine similarity, the logits are bounded in $[-1, 1]$. Consequently, it is necessary to scale the logits to a more suitable range. In past work, spherical losses have incorporated an inverse-temperature constant, $\beta > 0$ [14, 66, 68, 77]. Past efforts to turn $\beta$ into a trainable parameter find that it either does not work as well as fixing it, or no comparison is made to a fixed value [45, 46, 67].
|
| 206 |
+
|
| 207 |
+
In COSINE, ARCFACE, and VMF, we compare a fixed $\beta$ to a trained $\beta$ , using fixed values common in the literature. Under a suitable initialization and parameterization of temperature, the trained $\beta$ performs at least as well as a fixed value and avoids the manual search (Figure 5). In partic
|
| 208 |
+
|
| 209 |
+

|
| 210 |
+
Figure 5. Comparison between a learned temperature and various values of a fixed temperature on (left) CIFAR100 and (right) Cars196. Learning the temperature performs at least as well as fixing it, with the exception of vMF on Cars196.
|
| 211 |
+
|
| 212 |
+
ular, rather than performing constrained optimization in $\beta$, we perform unconstrained optimization in $\tau = \log(\beta)$. Details can be found in Appendix E.
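As a minimal PyTorch-style sketch of this parameterization (the initialization and layer sizes here are illustrative, not the settings used in our experiments), the inverse temperature can be kept positive by learning $\tau$ and exponentiating it:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineLogits(nn.Module):
    """Cosine-similarity logits with a learned inverse temperature beta = exp(tau).
    Optimizing tau = log(beta) is unconstrained while beta stays positive."""
    def __init__(self, dim, n_classes, init_beta=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, dim))
        self.tau = nn.Parameter(torch.log(torch.tensor(float(init_beta))))

    def forward(self, z):
        z = F.normalize(z, dim=-1)               # embeddings on the unit sphere
        w = F.normalize(self.weight, dim=-1)     # class weights on the unit sphere
        cos = z @ w.t()                          # logits bounded in [-1, 1]
        return torch.exp(self.tau) * cos         # scaled by beta = exp(tau) > 0
```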
|
| 213 |
+
|
| 214 |
+
# 4. Related Work
|
| 215 |
+
|
| 216 |
+
In Section 1, we dichotomized the literature in a coarse manner based on whether losses are similarity based or classification based. In this section, we focus on recent work relevant to the vMF, including loss functions that use vMF distributions and stochastic classifiers.
|
| 217 |
+
|
| 218 |
+
A popular use of vMF distributions in machine learning has been clustering [3, 18]. Banerjee et al. [3] consider a vMF mixture model trained with expectation-maximization, and Gopal and Yang [18] propose a fully-Bayesian extension along with hierarchical and temporal versions trained with variational inference. In addition, vMF distributions have begun to be applied in supervised settings. Hasnat et al. [21] suggest a supervised classifier for face verification where each term in the softmax is interpreted as the probability density of a vMF distribution. Zhe et al. [79] propose a similarity-based loss that is functionally identical to [21], except the mean parameters of the vMF distributions for each class are estimated as the maximum-likelihood estimate of the training data. Park et al. [43] describe the spherical analog to the prototypical network [54], but use a generator network to output the prototypes. The downside of these supervised losses compared to vMF is that they assume the concentration parameter across all classes is identical and fixed, canceling $C_n(\kappa)$ in the softmax. Such a decision is mathematically convenient, but removes a significant amount of flexibility in the model. Davidson et al. [11] propose a variational autoencoder (VAE) with hyperspherical latent structure by using a vMF distribution. They find it can outperform a Gaussian VAE, but only in lower-dimensional latent spaces. We leverage the rejection-sampling reparameterization scheme they develop to train vMF.
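For reference, a short numpy/scipy sketch of the vMF log-density and its normalizer $C_n(\kappa)$ under the standard parameterization (when $\kappa$ is shared and fixed across classes, as in the losses above, this normalizer cancels in the softmax):

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function

def vmf_log_normalizer(kappa, dim):
    """log C_dim(kappa) for a vMF distribution on the (dim-1)-sphere.
    Uses ive for numerical stability: log I_v(k) = log(ive(v, k)) + k for k > 0."""
    v = dim / 2.0 - 1.0
    return v * np.log(kappa) - (dim / 2.0) * np.log(2.0 * np.pi) \
        - (np.log(ive(v, kappa)) + kappa)

def vmf_log_pdf(x, mu, kappa):
    """Log-density of a unit vector x under vMF(mu, kappa), with mu a unit vector."""
    return vmf_log_normalizer(kappa, x.shape[-1]) + kappa * np.dot(mu, x)
```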
|
| 219 |
+
|
| 220 |
+
Other work has sought losses that consider either
|
| 221 |
+
|
| 222 |
+
stochastic embeddings or stochastic logits, but they use Gaussians instead of vMFs. Chang et al. [7] use stochastic embeddings but deterministic classification weights, and train a classifier using Monte Carlo samples. They also add a KL-divergence regularizer between the embedding distribution and a zero-mean, unit-variance Gaussian. Collier et al. [10] propose a classification loss with stochastic logits that uses a temperature-parameterized softmax. They show it can train under the influence of heteroscedastic label noise with improved accuracy and calibration. Shi and Jain [53] convert deterministic face embeddings into Gaussians by training a post-hoc network to estimate the covariances. Their objective maximizes the mutual likelihood of same-class embeddings. Scott et al. [52] and Oh et al. [41] propose similarity-based losses that are the stochastic analogs of the prototypical network [54] and pairwise contrastive loss [20], respectively. Neither loss shows promise with high-dimensional embeddings. The loss from [52] struggles to compete with a standard prototypical network in 64D, and [41] omits any results with embeddings larger than 3D. Gaussian distributions suffer from the curse of dimensionality [11], which is one possible explanation for the inferior performance compared to deterministic alternatives.
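As a schematic illustration of the Gaussian stochastic-embedding approach (not the implementation of [7] or [10]), the loss below averages the cross-entropy over reparameterized embedding samples and adds a KL regularizer towards a zero-mean, unit-variance Gaussian; the sample count and KL weight are placeholders.

```python
import torch
import torch.nn.functional as F

def gaussian_embedding_loss(mu, log_var, weight, labels, n_samples=8, kl_weight=1e-4):
    """Monte-Carlo cross-entropy for stochastic embeddings z ~ N(mu, diag(exp(log_var)))
    with deterministic class weights, plus KL(N(mu, var) || N(0, I)).
    Shapes: mu, log_var are (B, D); weight is (C, D); labels is (B,)."""
    std = torch.exp(0.5 * log_var)
    ce_terms = []
    for _ in range(n_samples):
        z = mu + std * torch.randn_like(std)      # reparameterized sample
        ce_terms.append(F.cross_entropy(z @ weight.t(), labels))
    ce = torch.stack(ce_terms).mean()
    kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1.0).sum(dim=1).mean()
    return ce + kl_weight * kl
```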
|
| 223 |
+
|
| 224 |
+
The work most closely related to ours is Kornblith et al. [27], which compares STANDARD, COSINE, and an assortment of alternatives including mean-squared error, sigmoid cross-entropy, and various regularizers applied to STANDARD. In contrast, we focus on multiple variants of spherical losses and on comparing embedding geometries. Their findings are compatible with ours.
|
| 225 |
+
|
| 226 |
+
# 5. Conclusions
|
| 227 |
+
|
| 228 |
+
In this work, we perform a systematic comparison of classification losses that span three embedding geometries—Euclidean, hyperbolic, and spherical—and attempt to reconcile the discrepancies in past work regarding their performance. Our investigations have led to a stochastic spherical classifier where embeddings and class weight vectors are von Mises-Fisher random variables. Our proposed loss is on par with other classification variants and also produces consistently reduced out-of-the-box calibration error. Our loss encodes instance ambiguity using the concentration parameter of the vMF distribution.
|
| 229 |
+
|
| 230 |
+
Consistent with the no-free-lunch theorem [73], we find there is no one loss to rule them all. This conclusion is uncommon on arXiv, and it is even rarer in published work. The performance jump claimed for novel losses often vanishes under rigorous, systematic comparisons in controlled settings: settings where the network architecture is identical, hyperparameters are optimized with equal vigor, regularization and data augmentation are matched, and a held-out validation set is used to choose hyperparameters. Musgrave et al. [39] reach a similar conclusion: the gap be
|
| 231 |
+
|
| 232 |
+
tween various similarity-based losses, as well as the spherical classification losses (COSINE and ARCFACE), was much smaller in magnitude than claimed in the original papers proposing them. We are hopeful that future work on supervised loss functions will also prioritize rigorous experiments and step away from the compulsion to show unqualified improvements over the state of the art.
|
| 233 |
+
|
| 234 |
+
Pedantry aside, we are able to glean some positive messages by systematically comparing performance across geometries and across different classification paradigms. In the remainder of the paper, we focus on specific recommendations that address trade-offs among the embedding geometries.
|
| 235 |
+
|
| 236 |
+
Accuracy. Many losses are designed primarily with accuracy in mind. Across both fixed- and open-set tasks, we find that losses operating on a spherical geometry perform best. Our results support the following ranking of losses: STANDARD $\leq$ HYPERBOLIC $\leq$ {COSINE, ARCFACE, VMF}. Additionally, our results corroborate two conclusions from Chen et al. [8]. First, smaller intra-class variance yields better generalization, and second, STANDARD focuses too much on separating classes by increasing embedding norms rather than reducing angular variance (e.g., Figure 4). Although the best of the spherical losses appears to be dataset dependent, the guidance to focus on spherical losses and perform empirical comparisons is not business as usual for practitioners, who treat STANDARD as the go-to loss. For practitioners using models with non-spherical geometries (STANDARD and HYPERBOLIC), we can still provide the guidance to use cosine distance at test time—discarding the embedding magnitude—which seems to reliably lead to improved retrieval performance.
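A minimal sketch of this recommendation: switching to cosine distance at retrieval time amounts to normalizing embeddings before nearest-neighbor search, which discards the magnitude that STANDARD and HYPERBOLIC otherwise fold into their distances.

```python
import numpy as np

def retrieve(queries, gallery, use_cosine=True, k=1):
    """Nearest-neighbor retrieval; with use_cosine=True the embedding norm is
    discarded, the recommendation for STANDARD and HYPERBOLIC embeddings."""
    if use_cosine:
        queries = queries / np.linalg.norm(queries, axis=1, keepdims=True)
        gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        scores = queries @ gallery.T                       # cosine similarity
    else:
        d2 = ((queries[:, None, :] - gallery[None, :, :]) ** 2).sum(-1)
        scores = -d2                                       # negative squared Euclidean distance
    return np.argsort(-scores, axis=1)[:, :k]              # indices of the top-k matches
```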
|
| 237 |
+
|
| 238 |
+
An aspect of accuracy we do not consider, however, is the performance of downstream target tasks where a classification loss is used to pre-train weights. Kornblith et al. [27] discover that better class separation on the pre-training task can lead to worse transfer performance, as improved discriminability implies task specialization (i.e., throwing away inter-class variance necessary to relate classes). Spherical losses perform well on fixed-set tasks as well as on open-set tasks where the novel test classes are drawn from a distribution very similar to that of the training distribution, but transfer learning is typically a setting where STANDARD has been shown to be superior. We believe that it would be useful in future research to distinguish between near- and far-transfer, as doing so may yield distinct conclusions.
|
| 239 |
+
|
| 240 |
+
Calibration. In sensitive domains, deployed machine learning systems must produce trustworthy predictions, which requires that model confidence scores match model accuracy (i.e., they must be well calibrated). To our knowl-
|
| 241 |
+
|
| 242 |
+
edge, we are the first to rigorously examine the effect of embedding geometry on calibration performance. Our findings indicate that when a validation set is available for temperature scaling, STANDARD consistently produces predictions with the lowest calibration error, but STANDARD generally underperforms in accuracy. Additionally, there does not exist a significant gap in out-of-the-box calibration performance across previously proposed losses from the three geometries. However, our novel vMF loss achieves superior calibration while maintaining state-of-the-art classification performance.
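As a rough reference for how calibration error is typically measured, the standard equal-width-bin ECE can be sketched as follows (a simplified estimator; debiased variants such as [49] differ in the details):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """Equal-width-bin ECE: bin predictions by confidence and average
    |accuracy - confidence| weighted by the fraction of samples per bin.
    probs is an (N, C) array of softmax outputs; labels is (N,)."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return float(ece)
```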
|
| 243 |
+
|
| 244 |
+
Future Work. We are exploring several directions for future work. First, [13] introduces a novel probability distribution—the Power Spherical distribution—with support on the surface of the hypersphere. They claim it has several key improvements over vMF distributions including a reparameterization trick that does not require rejection sampling, improved stability in high dimensions and for large values of the concentration parameter, and no dependence on Bessel functions. A stochastic classifier based on the Power Spherical distribution would likely improve the computational efficiency as well as the optimization procedure, particularly for high-dimensional embedding spaces. Second, we note that our formulation of vMF is a special case of a class of objectives based on the deep variational information bottleneck [1]: $I(Z,Y) - \gamma I(Z,X)$ , where $\gamma = 0$ and the classification weights are also stochastic. Our objective thus lacks a regularizer attempting to compress the amount of input information contained in the embedding. Adding this regularization term may lead to improved performance and robustness. Third, all of the experimental datasets are approximately balanced and without label noise. Due to the stochasticity of the classification weights, vMF seems likely to benefit in supervised settings with long-tailed distributions or heteroscedastic label noise [10].
|
| 245 |
+
|
| 246 |
+
It is unfortunate that the field of supervised representation learning has become so vast that researchers tend to specialize in a particular learning paradigm (e.g., fixed-set classification, few-shot learning, transfer learning, deep metric-learning) or domain (e.g., face verification, person re-identification, image classification). As a result, losses are often pigeonholed to one paradigm or domain. The objective of this work is to lay out the space of losses in terms of embedding geometry and systematically survey losses that are not typically compared to one another. One surprising and important result in this survey is the strength of spherical losses, and the resulting dissociation of embedding norm and output confidence.
|
| 247 |
+
|
| 248 |
+
# References
|
| 249 |
+
|
| 250 |
+
[1] Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep Variational Information Bottleneck. In International Conference on Learning Representations, 2017. 8
|
| 251 |
+
[2] Kelsey R. Allen, Evan Shelhamer, Hanul Shin, and Joshua B. Tenenbaum. Infinite Mixture Prototypes for Few-Shot Learning. In International Conference on Machine Learning, 2019. 1
|
| 252 |
+
[3] Arindam Banerjee, Inderjit S. Dhillon, Joydeep Ghosh, and Suvrit Sra. Clustering on the Unit Hypersphere Using von Mises-Fisher Distributions. Journal of Machine Learning Research, 6, 2005. 7
|
| 253 |
+
[4] Malik Boudiaf, Jérôme Rony, Imtiaz Masud Ziko, Eric Granger, Marco Pedersoli, Pablo Piantanida, and Ismail Ben Ayed. A Unifying Mutual Information View of Metric Learning: Cross-Entropy vs. Pairwise Losses. In European Conference on Computer Vision, 2020. 1, 6, 14
|
| 254 |
+
[5] John S. Bridle. Probabilistic Interpretation of Feedforward Classification Network Outputs, with Relationships to Statistical Pattern Recognition. In Neurocomputing. Springer, 1990. 1, 2
|
| 255 |
+
[6] Fatih Cakir, Kun He, Xide Xia, Brian Kulis, and Stan Sclaroff. Deep Metric Learning to Rank. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. 1
|
| 256 |
+
[7] Jie Chang, Zhonghao Lan, Changmao Cheng, and Yichen Wei. Data Uncertainty Learning in Face Recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2020. 1, 7
|
| 257 |
+
[8] Beidi Chen, Weiyang Liu, Zhiding Yu, Jan Kautz, Anshumali Shrivastava, Animesh Garg, and Anima Anandkumar. Angular Visual Hardness. In International Conference on Machine Learning, 2020. 8
|
| 258 |
+
[9] Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a Similarity Metric Discriminatively, with Application to Face Verification. In IEEE Conference on Computer Vision and Pattern Recognition, 2005. 1
|
| 259 |
+
[10] Mark Collier, Basil Mustafa, Efi Kokiopoulou, Rodolphe Jenatton, and Jesse Berent. A Simple Probabilistic Method for Deep Classification under Input-Dependent Label Noise. arXiv e-prints 2003.06778 cs.LG, 2020. 1, 7, 8
|
| 260 |
+
[11] Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, and Jakub M. Tomczak. Hyperspherical Variational Auto-Encoders. Conference on Uncertainty in Artificial Intelligence, 2018. 4, 7
|
| 261 |
+
[12] Alexandre de Brébisson and Pascal Vincent. An Exploration of Softmax Alternatives Belonging to the Spherical Loss Family. In International Conference on Learning Representations, 2016. 1
|
| 262 |
+
[13] Nicola De Cao and Wilker Aziz. The Power Spherical Distribution. In International Conference on Machine Learning, 2020. 8
|
| 263 |
+
[14] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. ArcFace: Additive Angular Margin Loss for Deep Face Recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. 1, 3, 5, 6
|
| 264 |
+
[15] Stanislav Fort. Gaussian Prototypical Networks for Few-Shot Learning on Omniglot. NeurIPS Bayesian Deep Learning Workshop, 2017. 1
|
| 265 |
+
|
| 266 |
+
[16] Octavian Ganea, Gary Becigneul, and Thomas Hofmann. Hyperbolic Neural Networks. In Advances in Neural Information Processing Systems 31, 2018. 1, 2
|
| 267 |
+
[17] Jacob Goldberger, Geoffrey E. Hinton, Sam Roweis, and Russ R. Salakhutdinov. Neighbourhood Components Analysis. In Advances in Neural Information Processing Systems 17, 2004. 1
|
| 268 |
+
[18] Siddharth Gopal and Yiming Yang. von Mises-Fisher Clustering Models. In International Conference on Machine Learning, 2014. 7
|
| 269 |
+
[19] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On Calibration of Modern Neural Networks. In International Conference on Machine Learning, 2017. 5
|
| 270 |
+
[20] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality Reduction by Learning an Invariant Mapping. In IEEE Conference on Computer Vision and Pattern Recognition, 2006. 1, 7
|
| 271 |
+
[21] Md. Abul Hasnat, Julien Bohné, Jonathan Milgram, Stéphane Gentic, and Liming Chen. von Mises-Fisher Mixture Model-based Deep learning: Application to Face Verification. arXiv e-prints 1706.04264 cs.CV, 2017. 7
|
| 272 |
+
[22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016. 14
|
| 273 |
+
[23] Junlin Hu, Jiwen Lu, and Yap Peng Tan. Discriminative Deep Metric Learning for Face Verification in the Wild. In IEEE Conference on Computer Vision and Pattern Recognition, 2014. 1
|
| 274 |
+
[24] Lukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. Learning to Remember Rare Events. In International Conference on Learning Representations, 2017. 1
|
| 275 |
+
[25] Valentin Khrulkov, Leyla Mirvakhabova, Evgeniya Ustinova, Ivan Oseledets, and Victor Lempitsky. Hyperbolic Image Embeddings. In IEEE Conference on Computer Vision and Pattern Recognition, 2020. 1, 2
|
| 276 |
+
[26] Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese Neural Networks for One-Shot Image Recognition. ICML Deep Learning Workshop, 2015. 1
|
| 277 |
+
[27] Simon Kornblith, Honglak Lee, Ting Chen, and Mohammad Norouzi. What's in a Loss Function for Image Classification? arXiv e-prints 2010.16402 cs.CV, 2020. 7, 8
|
| 278 |
+
[28] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3D Object Representations for Fine-Grained Categorization. In IEEE International Conference on Computer Vision Workshops, 2013. 4, 5
|
| 279 |
+
[29] Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. 2009. 5
|
| 280 |
+
[30] Ananya Kumar, Percy Liang, and Tengyu Ma. Verified Uncertainty Calibration. In Advances in Neural Information Processing Systems 32, 2019. 5
|
| 281 |
+
[31] Sachin Kumar and Yulia Tsvetkov. von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs. In International Conference on Learning Representations, 2019. 4, 12
|
| 282 |
+
[32] Marc Law, Renjie Liao, Jake Snell, and Richard Zemel. Lorentzian Distance Learning for Hyperbolic Representations. In International Conference on Machine Learning, 2019. 1
|
| 283 |
+
|
| 284 |
+
[33] Yann LeCun and Corinna Cortes. MNIST Handwritten Digit Database, 2010. 3, 5
|
| 285 |
+
[34] Wei Li, Rui Zhao, Tong Xiao, and Xiaogang Wang. DeepReID: Deep Filter Pairing Neural Network for Person Re-Identification. In IEEE Conference on Computer Vision and Pattern Recognition, 2014. 1
|
| 286 |
+
[35] Jingtuo Liu, Yafeng Deng, Tao Bai, Zhengping Wei, and Chang Huang. Targeting Ultimate Accuracy: Face Recognition via Deep Embedding. arXiv e-prints 1506.07310 cs.CV, 2015. 1
|
| 287 |
+
[36] Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. SphereFace: Deep Hypersphere Embedding for Face Recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2017. 1, 3
|
| 288 |
+
[37] Pascal Mettes, Elise van der Pol, and Cees G. M. Snoek. Hyperspherical Prototype Networks. In Advances in Neural Information Processing Systems 32, 2019. 1
|
| 289 |
+
[38] Yair Movshovitz-Attias, Alexander Toshev, Thomas K. Leung, Sergey Ioffe, and Saurabh Singh. No Fuss Distance Metric Learning Using Proxies. In IEEE International Conference on Computer Vision, 2017. 1
|
| 290 |
+
[39] Kevin Musgrave, Serge Belongie, and Ser Nam Lim. A Metric Learning Reality Check. In European Conference on Computer Vision, 2020. 1, 2, 6, 7
|
| 291 |
+
[40] Maximilian Nickel and Douwe Kiela. Poincaré Embeddings for Learning Hierarchical Representations. In Advances in Neural Information Processing Systems 30, 2017. 1
|
| 292 |
+
[41] Seong Joon Oh, Kevin Murphy, Jiyan Pan, Joseph Roth, Florian Schroff, and Andrew Gallagher. Modeling Uncertainty with Hedged Instance Embedding. In International Conference on Learning Representations, 2019. 1, 2, 6, 7, 14
|
| 293 |
+
[42] Connor J. Parde, Carlos Castillo, Matthew Q. Hill, Y. Ivette Colon, Swami Sankaranarayanan, Jun-Cheng Chen, and Alice J. O'Toole. Deep Convolutional Neural Network Features and the Original Image. arXiv e-prints 1611.01751 cs.CV, 2017. 3
|
| 294 |
+
[43] Junyoung Park, Subin Yi, Yongseok Choi, Dong-Yeon Cho, and Jiwon Kim. Discriminative Few-Shot Learning Based on Directional Statistics. arXiv e-prints 1906.01819 cs.LG, 2019. 7
|
| 295 |
+
[44] Qi Qian, Lei Shang, Baigui Sun, Juhua Hu, Hao Li, and Rong Jin. SoftTriple Loss: Deep Metric Learning Without Triplet Sampling. In IEEE International Conference on Computer Vision, 2019. 1
|
| 296 |
+
[45] Rajeev Ranjan, Ankan Bansal, Hongyu Xu, Swami Sankaranarayanan, Jun-Cheng Chen, Carlos D. Castillo, and Rama Chellappa. Crystal Loss and Quality Pooling for Unconstrained Face Verification and Recognition. arXiv e-prints 1804.01159 cs.CV, 2019. 1, 3, 6
|
| 297 |
+
[46] Rajeev Ranjan, Carlos D. Castillo, and Rama Chellappa. L2-Constrained Softmax Loss for Discriminative Face Verification. arXiv e-prints 1703.09507 cs.CV, 2017. 1, 3, 6
|
| 298 |
+
[47] Karl Ridgeway and Michael C. Mozer. Learning Deep Disentangled Embeddings with the F-Statistic Loss. In Advances in Neural Information Processing Systems 31, 2018. 1
|
| 299 |
+
[48] Oren Rippel, Manohar Paluri, Piotr Dollar, and Lubomir Bourdev. Metric Learning with Adaptive Density Discrimination. In International Conference on Learning Representations, 2016. 1
|
| 302 |
+
[49] Rebecca Roelofs, Nicholas Cain, Jonathon Shlens, and Michael C. Mozer. Mitigating Bias in Calibration Error Estimation. arXiv e-prints 2012.08668 cs.LG, 2021. 5
|
| 303 |
+
[50] Diego Ruiz-Antolín and Javier Segura. A New Type of Sharp Bounds for Ratios of Modified Bessel Functions. Journal of Mathematical Analysis and Applications, 443(2):1232-1246, 2016. 4, 12
|
| 304 |
+
[51] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A Unified Embedding for Face Recognition and Clustering. In IEEE Conference on Computer Vision and Pattern Recognition, 2015. 1
|
| 305 |
+
[52] Tyler R. Scott, Karl Ridgeway, and Michael C. Mozer. Stochastic Prototype Embeddings. arXiv e-prints 1909.11702 stat.ML, 2019. 2, 6, 7
|
| 306 |
+
[53] Yichun Shi and Anil K. Jain. Probabilistic Face Embeddings. In IEEE International Conference on Computer Vision, 2019. 7
|
| 307 |
+
[54] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical Networks for Few-Shot Learning. In Advances in Neural Information Processing Systems 30, 2017. 1, 2, 7
|
| 308 |
+
[55] Kihyuk Sohn. Improved Deep Metric Learning with Multi-Class N-Pair Loss Objective. In Advances in Neural Information Processing Systems 29, 2016. 1
|
| 309 |
+
[56] Hyun Oh Song, Stefanie Jegelka, Vivek Rathod, and Kevin Murphy. Deep Metric Learning via Facility Location. In IEEE Conference on Computer Vision and Pattern Recognition, 2017. 1
|
| 310 |
+
[57] Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep Metric Learning via Lifted Structured Feature Embedding. In IEEE Conference on Computer Vision and Pattern Recognition, 2016. 1, 5
|
| 311 |
+
[58] Yifan Sun, Changmao Cheng, Yuhan Zhang, Chi Zhang, Liang Zheng, Zhongdao Wang, and Yichen Wei. Circle Loss: A Unified Perspective of Pair Similarity Optimization. In IEEE Conference on Computer Vision and Pattern Recognition, 2020. 1
|
| 312 |
+
[59] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, and Timothy M. Hospedales. Learning to Compare: Relation Network for Few-Shot Learning. In IEEE Conference on Computer Vision and Pattern Recognition, 2018. 1
|
| 313 |
+
[60] Eu Wern Teh, Terrance DeVries, and Graham W. Taylor. ProxyNCA++: Revisiting and Revitalizing Proxy Neighborhood Component Analysis. In European Conference on Computer Vision, 2020. 1
|
| 314 |
+
[61] Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, and Phillip Isola. Rethinking Few-Shot Image Classification: A Good Embedding is All You Need? arXiv eprints 2003.11539 cs.CV, 2020. 1, 2
|
| 315 |
+
[62] Eleni Triantafillou, Richard Zemel, and Raquel Urtasun. Few-Shot Learning Through an Information Retrieval Lens. In Advances in Neural Information Processing Systems 30, 2017. 1
|
| 316 |
+
[63] Evgeniya Ustinova and Victor Lempitsky. Learning Deep Embeddings with Histogram Loss. In Advances in Neural Information Processing Systems 29, 2016. 1
|
| 317 |
+
[64] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching Networks for
|
| 318 |
+
|
| 319 |
+
One Shot Learning. In Advances in Neural Information Processing Systems 29, 2016. 1
|
| 320 |
+
[65] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical report, California Institute of Technology, 2011. 5
|
| 321 |
+
[66] Feng Wang, Jian Cheng, Weiyang Liu, and Haijun Liu. Additive Margin Softmax for Face Verification. IEEE Signal Processing Letters, 2018. 1, 3, 6
|
| 322 |
+
[67] Feng Wang, Xiang Xiang, Jian Cheng, and Alan L. Yuille. NormFace: L2 Hypersphere Embedding for Face Verification. In ACM Multimedia Conference, 2017. 1, 3, 6
|
| 323 |
+
[68] Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. CosFace: Large Margin Cosine Loss for Deep Face Recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2018. 1, 3, 6
|
| 324 |
+
[69] Jian Wang, Feng Zhou, Shilei Wen, Xiao Liu, and Yuanqing Lin. Deep Metric Learning with Angular Loss. In IEEE International Conference on Computer Vision, 2017. 1
|
| 325 |
+
[70] Xun Wang, Xintong Han, Weilin Huang, Dengke Dong, and Matthew R. Scott. Multi-Similarity Loss with General Pair Weighting for Deep Metric Learning. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. 1
|
| 326 |
+
[71] Xinshao Wang, Yang Hua, Elyor Kodirov, Guosheng Hu, Romain Garnier, and Neil M. Robertson. Ranked List Loss for Deep Metric Learning. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. 1
|
| 327 |
+
[72] Kilian Q. Weinberger and Lawrence K. Saul. Distance Metric Learning for Large Margin Nearest Neighbor Classification. Journal of Machine Learning Research, 10, 2009. 1
|
| 328 |
+
[73] David H. Wolpert and William G. Macready. No Free Lunch Theorems for Optimization. IEEE Transactions on Evolutionary Computation, 1997. 7
|
| 329 |
+
[74] Chao-Yuan Wu, R. Manmatha, Alexander J. Smola, and Philipp Krahenbuhl. Sampling Matters in Deep Embedding Learning. In IEEE International Conference on Computer Vision, 2017. 1
|
| 330 |
+
[75] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv e-prints 1708.07747 cs.LG, 2017. 5
|
| 331 |
+
[76] Tongtong Yuan, Weihong Deng, Jian Tang, Yinan Tang, and Binghui Chen. Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. 1
|
| 332 |
+
[77] Andrew Zhai and Hao Yu Wu. Classification is a Strong Baseline for Deep Metric Learning. In British Machine Vision Conference, 2019. 1, 2, 6
|
| 333 |
+
[78] Li Zhang, Tao Xiang, and Shaogang Gong. Learning a Deep Embedding Model for Zero-Shot Learning. In IEEE Conference on Computer Vision and Pattern Recognition, 2017. 1
|
| 334 |
+
[79] Xuefei Zhe, Shifeng Chen, and Hong Yan. Directional Statistics-Based Deep Metric Learning for Image Classification and Retrieval. arXiv e-prints 1802.09662 cs.CV, 2018. 7
|
vonmisesfisherlossanexplorationofembeddinggeometriesforsupervisedlearning/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b546f846c052aa447f3140d9611b3c2a3d740790bf1753909432ae51cf346025
|
| 3 |
+
size 445991
|
vonmisesfisherlossanexplorationofembeddinggeometriesforsupervisedlearning/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:41c728e3ad5b74b762cedd2a27e98d70cd31486f8277e011f6d6214c5f3354f0
|
| 3 |
+
size 420360
|
whenpigsflycontextualreasoninginsyntheticandnaturalscenes/526d1cec-6da5-4743-96b9-a19fc9e7a96b_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f7e97aaf7a921c82cfb783f38e8b3a6dd7de5469865092d2a89e251ccb9bddde
|
| 3 |
+
size 74786
|
whenpigsflycontextualreasoninginsyntheticandnaturalscenes/526d1cec-6da5-4743-96b9-a19fc9e7a96b_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:9406743a016f669a2d641250bda2975e37ea87ccf3599b087248c6b59b21fc6c
|
| 3 |
+
size 89884
|
whenpigsflycontextualreasoninginsyntheticandnaturalscenes/526d1cec-6da5-4743-96b9-a19fc9e7a96b_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0cc533ab2ac6b274e2698a6059a82521182819b4c21645eb58cff34388349ed3
|
| 3 |
+
size 3277023
|
whenpigsflycontextualreasoninginsyntheticandnaturalscenes/full.md
ADDED
|
@@ -0,0 +1,296 @@
| 1 |
+
# When Pigs Fly: Contextual Reasoning in Synthetic and Natural Scenes
|
| 2 |
+
|
| 3 |
+
Philipp Bomatter $^{1,\ast}$ , Mengmi Zhang $^{2,3,\ast}$ , Dimitar Karev $^{4}$ , Spandan Madan $^{3,5}$ , Claire Tseng $^{4}$ , and Gabriel Kreiman $^{2,3}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ ETH Zürich
|
| 6 |
+
|
| 7 |
+
$^{2}$ Children's Hospital, Harvard Medical School
|
| 8 |
+
|
| 9 |
+
$^{3}$ Center for Brains, Minds and Machines
|
| 10 |
+
|
| 11 |
+
$^{4}$ Harvard College, Harvard University
|
| 12 |
+
|
| 13 |
+
$^{5}$ School of Engineering and Applied Sciences, Harvard University

*Equal contribution
|
| 14 |
+
|
| 15 |
+
Address correspondence to gabriel.kreiman@tch.harvard.edu
|
| 16 |
+
|
| 17 |
+

|
| 18 |
+
Figure 1: Images under normal context and out-of-context conditions were generated in the VirtualHome environment [27] using the Unity 3D simulation engine [19]. The same target object (a mug, red bounding box) is shown in different context conditions: normal context (a, b) and out-of-context conditions including gravity ((c), target object is floating in the air), change in object co-occurrence statistics (d), combination of both gravity and object co-occurrence statistics (e), enlarged object size (f), and no context with uniform grey pixels as background (g).
|
| 19 |
+
|
| 20 |
+

|
| 21 |
+
|
| 22 |
+

|
| 23 |
+
|
| 24 |
+

|
| 25 |
+
|
| 26 |
+
# Abstract
|
| 27 |
+
|
| 28 |
+
Context is of fundamental importance to both human and machine vision; e.g., an object in the air is more likely to be an airplane than a pig. The rich notion of context incorporates several aspects including physics rules, statistical co-occurrences, and relative object sizes, among others. While previous work has focused on crowd-sourced out-of-context photographs from the web to study scene context, controlling the nature and extent of contextual violations has been a daunting task. Here we introduce a diverse, synthetic Out-of-Context Dataset (OCD) with fine-grained control over scene context. By leveraging a 3D simulation engine, we systematically control the gravity, object co-occurrences and relative sizes across 36 object categories in a virtual household environment. We conducted a series of
|
| 29 |
+
|
| 30 |
+
experiments to gain insights into the impact of contextual cues on both human and machine vision using OCD. We conducted psychophysics experiments to establish a human benchmark for out-of-context recognition, and then compared it with state-of-the-art computer vision models to quantify the gap between the two. We propose a context-aware recognition transformer model, fusing object and contextual information via multi-head attention. Our model captures useful information for contextual reasoning, enabling human-level performance and better robustness in out-of-context conditions compared to baseline models across OCD and other out-of-context datasets. All source code and data are publicly available at https://github.com/kreimanlab/WhenPigsFlyContext
|
| 31 |
+
|
| 32 |
+
# 1. Introduction
|
| 33 |
+
|
| 34 |
+
A coffee mug is usually a small object (Fig. 1a), which does not fly on its own (Fig. 1c) and can often be found on a table (Fig. 1a) but not on a chair (Fig. 1d). Such contextual cues have a pronounced impact on the object recognition capabilities of both humans [39] and computer vision models [34, 7, 25, 22]. Neural networks learn co-occurrence statistics between an object's appearance and its label, but also between the object's context and its label [11, 30, 2]. Therefore, it is not surprising that recognition models fail to recognize objects in unfamiliar contexts [29]. Despite the fundamental role of context in visual recognition, it remains unclear what contextual cues should be integrated with object information and how.
|
| 35 |
+
|
| 36 |
+
Two challenges have hindered progress in the study of the role of contextual cues: (1) context has usually been treated as a monolithic concept and (2) large-scale, internet-scraped datasets like ImageNet [9] or COCO [21] are highly uncontrolled. To address these challenges, we present a methodology to systematically study the effects of an object's context on recognition by leveraging a Unity-based 3D simulation engine for image generation [19], and manipulating 3D objects in a virtual home environment [27]. The ability to rigorously control every aspect of the scene enables us to systematically violate contextual rules and assess their impact on recognition. We focus on three fundamental aspects of context: (1) gravity - objects without physical support, (2) object co-occurrences - unlikely object combinations, and (3) relative size - changes to the size of target objects relative to the background. As a critical benchmark, we conducted psychophysics experiments to measure human performance and compare it with state-of-the-art computer vision models.
|
| 37 |
+
|
| 38 |
+
We propose a new context-aware architecture, which can incorporate object and contextual information to achieve higher object recognition accuracy given proper context and robustness to out-of-context situations. Our Context-aware Recognition Transformer Network (CRTNet) uses two separate streams to process the object and its context independently before integrating them via multi-head attention in transformer decoder modules. Across multiple datasets, the CRTNet model surpasses other state-of-the-art computational models in normal context and classifies objects robustly despite large contextual variations, much like humans do.
|
| 39 |
+
|
| 40 |
+
Our contributions in this paper are three-fold. Firstly, we introduce a challenging new dataset for in- and out-of-context object recognition that allows fine-grained control over context violations including gravity, object co-occurrences and relative object sizes (out-of-context dataset, OCD). Secondly, we conduct psychophysics experiments to establish a human benchmark for in-
|
| 41 |
+
|
| 42 |
+
and out-of-context recognition and compare it with state-of-the-art computer vision models. Finally, we propose a new context-aware architecture for object recognition, which combines object and scene information to reason about context and generalizes well to out-of-context images. We release the entire dataset, including our tools for the generation of additional images and the source code for CRTNet at https://github.com/kreimanlab/WhenPigsFlyContext.
|
| 43 |
+
|
| 44 |
+
# 2. Related Works
|
| 45 |
+
|
| 46 |
+
Out-of-context datasets: Notable works on out-of-context datasets include the UnRel dataset [26] and the Cut-and-paste dataset presented in [39]. While UnRel is a remarkable collection of out-of-context natural images, it is limited in size and diversity. A drawback of cutting-and-pasting [14] is the introduction of artifacts such as unnatural lighting, object boundaries, sizes and positions. Neither of those datasets allows systematic analysis of individual properties of context. 3D simulation engines enable easily synthesizing many images and systematically investigating the violation of contextual cues. It is challenging to achieve these goals with real-world photographs. Moreover, these simulation engines enable precise control of contextual parameters, changing cues one at a time in a systematic and quantifiable manner.
|
| 47 |
+
|
| 48 |
+
Out-of-context object recognition: In previous work, context has mostly been studied as a monolithic property in the form of the target object's background. Previous work included testing the generalization to new backgrounds [2] and incongruent backgrounds [39], exploring the impact of foreground-background relationships on data augmentation [13], and replacing image sub-regions by another sub-image, i.e. object transplanting [29]. In this paper, we evaluate different properties of contextual cues (e.g. gravity) in a quantitative, controlled, and systematic manner.
|
| 49 |
+
|
| 50 |
+
3D simulation engines and computer vision: Recent studies have demonstrated the success of using 3D virtual environments for tasks such as object recognition with simple and uniform backgrounds [3], routine program synthesis [27], 3D animal pose estimation [24], and studying the generalization capabilities of CNNs [23, 16]. However, to the best of our knowledge, none of these studies have tackled the challenging problem of how to integrate contextual cues.
|
| 51 |
+
|
| 52 |
+
Models for context-aware object recognition: To tackle the problem of context-aware object recognition, researchers have proposed classical approaches, e.g. Conditional Random Field (CRF) [15, 38, 20, 6], and graph-based methods [32, 37, 33, 7]. Recent studies have extended this line of work to deep graph neural networks [17, 8, 10, 1]. Breaking away from these previous
|
| 53 |
+
|
| 54 |
+
works where graph optimization is performed globally for contextual reasoning in object recognition, our model has a two-stream architecture which separately processes visual information on both target objects and context, and then integrates them with multi-head attention in stacks of transformer decoder layers. In contrast to other vision transformer models in object recognition [12] and detection [5], CRTNet performs in-context recognition tasks given the target object location.
|
| 55 |
+
|
| 56 |
+
# 3. Context-aware Recognition Transformer
|
| 57 |
+
|
| 58 |
+
# 3.1. Overview
|
| 59 |
+
|
| 60 |
+
We propose the Context-aware Recognition Transformer Network (CRTNet, Figure 2). CRTNet is presented with an image with multiple objects and a bounding box to indicate the target object location. The model has three main elements: First, CRTNet uses a stack of transformer decoder modules with multi-head attention to hierarchically reason about context and integrate contextual cues with object information. Second, a confidence-weighting mechanism improves the model's robustness and gives it the flexibility to select what information to rely on for recognition. Third, we curated the training methodology with gradient detachment to prioritize important model components and ensure efficient training of the entire architecture.
|
| 61 |
+
|
| 62 |
+
Inspired by the eccentricity dependence of human vision, CRTNet has one stream that processes only the target object $(I_{t},224\times 224)$ , and a second stream devoted to the periphery $(I_c,224\times 224)$ . $I_{t}$ is obtained by cropping the input image to the bounding box whereas $I_{c}$ covers the entire contextual area of the image. $I_{c}$ and $I_{t}$ are resized to the same dimensions. Thus, the target object's resolution is higher in $I_{t}$ . The two streams are encoded through separate 2D-CNNs. After the encoding stage, CRTNet tokenizes the feature maps of $I_{t}$ and $I_{c}$ , integrates object and context information via hierarchical reasoning through a stack of transformer decoder layers, and predicts class label probabilities $y_{t,c}$ within $C$ classes.
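A minimal sketch of this preprocessing (not the authors' exact pipeline; the interpolation mode is an assumption): the target crop and the full image are resized to the same resolution, so the target stream sees the object at higher resolution.

```python
import torch.nn.functional as F

def make_two_stream_inputs(image, bbox, size=224):
    """Build I_t (crop around the target) and I_c (full context image) from a
    (C, H, W) image tensor and a bounding box (x0, y0, x1, y1) in pixels."""
    x0, y0, x1, y1 = bbox
    target = image[:, y0:y1, x0:x1]
    i_t = F.interpolate(target.unsqueeze(0), size=(size, size),
                        mode="bilinear", align_corners=False).squeeze(0)
    i_c = F.interpolate(image.unsqueeze(0), size=(size, size),
                        mode="bilinear", align_corners=False).squeeze(0)
    return i_t, i_c
```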
|
| 63 |
+
|
| 64 |
+
A model that always relies on context can make mistakes under unusual context conditions. To increase robustness, CRTNet makes a second prediction $y_{t}$ , based on target object information alone, estimates the confidence $p$ of this prediction, and computes a confidence-weighted average of $y_{t}$ and $y_{t,c}$ to get the final prediction $y_{p}$ . If the model makes a confident prediction based on the target object alone, this decision overrules the context reasoning stage.
|
| 65 |
+
|
| 66 |
+
# 3.2. Convolutional Feature Extraction
|
| 67 |
+
|
| 68 |
+
CRTNet takes $I_{c}$ and $I_{t}$ as inputs and uses two 2D-CNNs, $E_{c}(\cdot)$ and $E_{t}(\cdot)$ , to extract context and target feature maps $a_{c}$ and $a_{t}$ , respectively, where $E_{c}(\cdot)$ and $E_{t}(\cdot)$
|
| 69 |
+
|
| 70 |
+
are parameterized by $\theta_{E_c}$ and $\theta_{E_t}$ . We use the DenseNet architecture [18] with weights pre-trained on ImageNet [9] and fine-tune it. Assuming that different features in $I_{c}$ and $I_{t}$ are useful for recognition, we do not enforce sharing of the parameters $\theta_{E_c}$ and $\theta_{E_t}$ . We demonstrate the advantage of non-shared parameters in the ablation study (Sec. 5.5). To allow CRTNet to focus on specific parts of the image and select features at those locations, we preserve the spatial organization of features and define $a_{c}$ and $a_{t}$ as the output feature maps from the last convolution layer of DenseNet. Both $a_{c}$ and $a_{t}$ are of size $D\times W\times H = 1,664\times 7\times 7$ where $D$ , $W$ and $H$ denote the number of channels, width and height of the feature maps respectively.
|
| 71 |
+
|
| 72 |
+
# 3.3. Tokenization and Positional Encoding
|
| 73 |
+
|
| 74 |
+
We tokenize the context feature map $a_{c}$ by splitting it into patches based on locations, following [12]. Each context token corresponds to a feature vector $\mathbf{a}_{\mathbf{c}}^{i}$ of dimension $D$ at location $i$ where $i \in \{1, \dots, L = H \times W\}$. To compute the target token $T_{t}$, CRTNet aggregates the target feature map $a_{t}$ via average pooling:
|
| 75 |
+
|
| 76 |
+
$$
|
| 77 |
+
T_{t} = \frac{1}{L} \sum_{i = 1, \dots, L} \mathbf{a}_{\mathbf{t}}^{i} \tag{1}
|
| 78 |
+
$$
|
| 79 |
+
|
| 80 |
+
To encode the spatial relations between the target token and the context tokens, as well as between different context tokens, we learn a positional embedding of size $D$ for each location $i$ and add it to the corresponding context token $\mathbf{a}_{\mathbf{c}}^{\mathrm{i}}$ . For the target token $T_{t}$ , we use the positional embedding corresponding to the location, within which the bounding box midpoint is contained. The positionally-encoded context and target tokens are denoted by $z_{c}$ and $z_{t}$ respectively.
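A condensed PyTorch-style sketch of this tokenization step (dimensions follow the feature-map sizes above; the exact module structure is an assumption):

```python
import torch
import torch.nn as nn

class Tokenizer(nn.Module):
    """Turn the context feature map into L = H*W positionally-encoded tokens and
    pool the target feature map into a single target token (Eq. 1)."""
    def __init__(self, d=1664, h=7, w=7):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(h * w, d))   # learned positional embeddings

    def forward(self, a_c, a_t, target_loc):
        # a_c, a_t: (B, D, H, W); target_loc: (B,) index of the bbox midpoint cell
        z_c = a_c.flatten(2).transpose(1, 2) + self.pos  # (B, L, D) context tokens
        t_t = a_t.flatten(2).mean(dim=2)                 # (B, D) target token, Eq. 1
        z_t = t_t + self.pos[target_loc]                 # positional encoding for the target
        return z_c, z_t
```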
|
| 81 |
+
|
| 82 |
+
# 3.4. Transformer Decoder
|
| 83 |
+
|
| 84 |
+
We follow the original transformer decoder [36], taking $z_{c}$ to compute keys and values, and $z_{t}$ to generate the queries in the transformer encoder-decoder multi-head attention layer. Since we only have a single target token, we omit the self-attention layer. In the experiments, we also tested CRTNet with self-attention enabled and we did not observe performance improvements. Our decoder layer consists of alternating layers of encoder-decoder attention (EDA) and multi-layer perceptron (MLP) blocks. Layernorm (LN) is applied after each residual connection. Dropout (DROP) is applied within each residual connection and MLP block. The MLP contains two layers with a ReLU non-linearity and DROP.
|
| 85 |
+
|
| 86 |
+
$$
|
| 87 |
+
z_{t, c} = \operatorname{LN}(\operatorname{DROP}(\operatorname{EDA}(z_{t}, z_{c})) + z_{t}) \tag{2}
|
| 88 |
+
$$
|
| 89 |
+
|
| 90 |
+
$$
|
| 91 |
+
z_{t, c}^{\prime} = \operatorname{LN}(\operatorname{DROP}(\operatorname{MLP}(z_{t, c})) + z_{t, c}) \tag{3}
|
| 92 |
+
$$
|
| 93 |
+
|
| 94 |
+

|
| 95 |
+
Figure 2: Architecture overview of the Context-aware Recognition Transformer Network (CRTNet). CRTNet consists of 3 main modules: feature extraction, integration of context and target information, and confidence-modulated classification. CRTNet takes the cropped target object $I_{t}$ and the entire context image $I_{c}$ as inputs and extracts their respective features. These feature maps are then tokenized and the information of the two streams is integrated over multiple transformer decoder layers. CRTNet also estimates a confidence score for recognizing the target object based on object features alone, which is used to modulate the contributions of $y_{t}$ and $y_{t,c}$ to the final prediction $y_{p}$ . The dashed lines in backward direction denote gradient flows during backpropagation. The two black crosses denote where the gradient updates stop. See Sec. 3 for details.
|
| 96 |
+
|
| 97 |
+
Our transformer decoder has a stack of $X = 6$ layers, indexed by $x$ . We repeat the operations in Eqs 2 and 3 for each transformer decoder layer by recursively assigning $z_{t,c}^{\prime}$ back to $z_{t}$ as input to the next transformer decoder layer. Each EDA layer integrates useful information from the context and the target object with 8-head selective attention. Based on accumulated information from all previous $x - 1$ layers, each EDA layer enables CRTNet to progressively reason about context by updating the attention map on $z_{c}$ over all $L$ locations. We provide visualization examples of attention maps along the hierarchy of the transformer decoder modules in Supp. Fig S1.
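A sketch of one such decoder layer following Eqs. 2-3 (embedding dimension, feed-forward width, and dropout rate are illustrative):

```python
import torch.nn as nn

class DecoderLayer(nn.Module):
    """Encoder-decoder attention from the target token (query) over the context
    tokens (keys/values), followed by an MLP; dropout inside each residual branch
    and LayerNorm after it, as in Eqs. 2-3."""
    def __init__(self, d=1664, heads=8, d_ff=2048, p=0.1):
        super().__init__()
        self.eda = nn.MultiheadAttention(d, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, d_ff), nn.ReLU(),
                                 nn.Dropout(p), nn.Linear(d_ff, d))
        self.drop = nn.Dropout(p)
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, z_t, z_c):
        # z_t: (B, 1, D) target query token; z_c: (B, L, D) context tokens
        attn, _ = self.eda(z_t, z_c, z_c)                    # encoder-decoder attention
        z_tc = self.ln1(self.drop(attn) + z_t)               # Eq. 2
        return self.ln2(self.drop(self.mlp(z_tc)) + z_tc)    # Eq. 3; reused as z_t in the next layer
```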
|
| 98 |
+
|
| 99 |
+
# 3.5. Confidence-modulated Recognition
|
| 100 |
+
|
| 101 |
+
The context classifier $G_{z}(\cdot)$ with parameters $\theta_{G_z}$ consists of a fully-connected layer and a softmax layer. It takes the feature embedding $z_{t,c}^{\prime}$ from the last transformer decoder layer and outputs the predicted class distribution vector: $y_{t,c} = G_{z}(z_{t,c}^{\prime})$. Similarly, the target classifier $G_{t}(\cdot)$ takes the feature map $a_{t}$ as input and outputs the predicted class distribution vector: $y_{t} = G_{t}(a_{t})$.
|
| 102 |
+
|
| 103 |
+
Since neural networks are often fooled by incongruent context [39], we propose a confidence-modulated recognition mechanism balancing the predictions from $G_{t}(\cdot)$ and $G_{z}(\cdot)$ . The confidence estimator $U(\cdot)$ with parameters $\theta_{U}$ takes the target feature map $a_{t}$ as input and outputs a value $p$ indicating how confident CRTNet is about the prediction $y_{t}$ . $U(\cdot)$ is a feed-forward multi-layer perceptron network with a sigmoid function to normalize
|
| 104 |
+
|
| 105 |
+
the confidence score to [0, 1].
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
p = \frac{1}{1 + e^{-U(a_{t})}} \tag{4}
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
We use $p$ to compute a confidence-weighted average of $y_{t,c}$ and $y_{t}$ for the final predicted class distribution $y_{p}$ : $y_{p} = py_{t} + (1 - p)y_{t,c}$ . The higher the confidence $p$ , the more CRTNet relies on the target object itself, rather than on the integrated contextual information, for classification. We demonstrate the advantage of using $y_{p}$ rather than $y_{t,c}$ or $y_{t}$ as a final prediction in the ablation study (Sec. 5.5).
|
| 112 |
+
|
| 113 |
+
# 3.6. Training
|
| 114 |
+
|
| 115 |
+
CRTNet is trained end-to-end with three loss functions: (i) to train the confidence estimator $U(\cdot)$ , we use a cross-entropy loss with respect to the confidence-weighted prediction $y_{p}$ . This allows $U(\cdot)$ to learn to increase the confidence value $p$ when the prediction $y_{t}$ based on target object information alone is correct. (ii) To train $G_{t}(\cdot)$ , we use a cross-entropy loss with respect to $y_{t}$ . (iii) For the other components of CRTNet, including the transformer decoder modules and the classifier $G_{z}(\cdot)$ , we use a cross-entropy loss with respect to $y_{t,c}$ . Instead of training everything based on $y_{p}$ , the three loss functions together maintain strong learning signals for all parts in the architecture irrespective of the confidence value $p$ .
|
| 116 |
+
|
| 117 |
+
To facilitate learning for specific components in CRTNet, we also introduce gradient detachments during backpropagation (Fig. 2). Gradients flowing through both $U(\cdot)$ and $G_{t}(\cdot)$ are detached from $E_{t}(\cdot)$ to prevent them
|
| 118 |
+
|
| 119 |
+
from driving the target encoder to learn more discriminative features, which could impact the efficacy of the transformer modules and $G_{z}(\cdot)$ . We demonstrate the benefit of these design decisions in ablation studies (Sec. 5.5).
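A condensed sketch of how the three losses and the detachments can be wired together (an approximation of Fig. 2 with assumed head interfaces, not the released code):

```python
import torch
import torch.nn.functional as F

def crtnet_losses(a_t, z_tc_final, labels, G_t, G_z, U):
    """Three cross-entropy terms on y_tc, y_t, and the confidence-weighted y_p.
    G_t and U see a detached copy of the target features, so their gradients do
    not reach the target encoder E_t. G_t and G_z are assumed to end in a softmax,
    so the losses are written as negative log-likelihoods."""
    y_tc = G_z(z_tc_final)                   # (B, C) context-integrated distribution
    y_t = G_t(a_t.detach())                  # (B, C) target-only distribution
    p = torch.sigmoid(U(a_t.detach()))       # (B, 1) confidence in y_t (Eq. 4)
    y_p = p * y_t + (1.0 - p) * y_tc         # confidence-weighted prediction

    nll = lambda y: F.nll_loss(torch.log(y + 1e-8), labels)
    return nll(y_tc) + nll(y_t) + nll(y_p)   # trains decoder/G_z, G_t, and U respectively
```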
|
| 120 |
+
|
| 121 |
+
# 4. Experimental Details
|
| 122 |
+
|
| 123 |
+
# 4.1. Baselines
|
| 124 |
+
|
| 125 |
+
CATNet [39] is a context-aware two-stream object recognition model. It processes the visual features of a cropped target object and context in parallel, dynamically incorporates object and contextual information by constantly updating its attention over image locations, and sequentially reasons about the class label for the target object via a recurrent neural network.
|
| 126 |
+
|
| 127 |
+
Faster R-CNN [28] is an object detection algorithm. We adapted it to the context-aware object recognition task by replacing the region proposal network with the ground truth bounding box indicating the location of the target object.
|
| 128 |
+
|
| 129 |
+
DenseNet [18] is a 2D-CNN with dense connections that takes the cropped target object patch $I_{t}$ as input.
|
| 130 |
+
|
| 131 |
+
# 4.2. Datasets
|
| 132 |
+
|
| 133 |
+
# 4.2.1 Out-of-context Dataset (OCD)
|
| 134 |
+
|
| 135 |
+
Our out-of-context dataset (OCD) contains 36 object classes, with 15,773 test images of complex and rich scenes in 6 contextual conditions (described below). We leveraged the VirtualHome environment [27] developed in the Unity simulation engine to synthesize these images in indoor home environments within 7 apartments and 5 rooms per apartment. These rooms include furnished bedrooms, kitchens, study rooms, living rooms and bathrooms [27] (see Fig. 1 for examples). We extended VirtualHome with additional functionalities to manipulate object properties, such as materials and scales, and to place objects in out-of-context locations. The target object is always centered in the camera view; collision checking and camera ray casting are enabled to prevent object collisions and occlusions.
|
| 136 |
+
|
| 137 |
+
Normal Context and No Context: There are 2,309 images with normal context (Fig. 1b), and 2,309 images for the no-context condition (Fig. 1g). For the normal context condition, each target object is placed in its "typical" location, defined by the default settings of VirtualHome. We generate a corresponding no context image for every normal context image by replacing all the pixels surrounding the target object with either uniform grey pixels or salt and pepper noise.
|
| 138 |
+
|
| 139 |
+
Gravity: We generated 2,934 images where we move the target object along the vertical direction such that it is no longer supported (Fig. 1c). To avoid cases where objects are lifted so high that their surroundings change completely, we set the lifting offset to 0.25 meters.
|
| 140 |
+
|
| 141 |
+
Object Co-occurrences: To examine the importance of the statistics of object co-occurrences, four human subjects were asked to indicate the most likely rooms and locations for the target objects. We use their responses to generate 1,453 images where we place the target objects on surfaces with lower co-occurrence probability, e.g., a microwave in a bathroom (Fig. 1d).
|
| 142 |
+
|
| 143 |
+
Object Co-occurrences + Gravity: We generated 910 images where the objects are both lifted and placed in unlikely locations. We chose walls, windows, and doorways of rooms where the target object is typically absent (Fig. 1e). We place target objects at half of the apartment's height.
|
| 144 |
+
|
| 145 |
+
Size: We created 5,858 images where we changed the target object's size to 2, 3, or 4 times its original size while keeping the remaining objects in the scene intact (Fig. 1f).
|
| 146 |
+
|
| 147 |
+
# 4.2.2 Real-world Out-of-context Datasets
|
| 148 |
+
|
| 149 |
+
The Cut-and-paste dataset [39] contains 2,259 out-of-context images spanning 55 object classes. These images are grouped into 16 conditions obtained through the combinations of 4 object sizes and 4 context conditions (normal, minimal, congruent, and incongruent) (Fig. 3b).
|
| 150 |
+
|
| 151 |
+
The UnRel [26] Dataset contains more than 1,000 images with unusual relations among objects spanning 100 object classes. The dataset was collected from the web based on triplet queries, such as "dog rides bike" (Fig. 3c).
|
| 152 |
+
|
| 153 |
+
# 4.3. Performance Evaluation
|
| 154 |
+
|
| 155 |
+
Evaluation of Computational Models: We trained the models on natural images from COCO-Stuff [4] using the annotations for object classes overlapping with those in the respective test set (16 overlapping classes between VirtualHome and COCO-Stuff, 55 overlapping classes between Cut-and-paste and COCO-Stuff and 33 overlapping classes between UnRel and COCO-Stuff). The models were then tested on OCD, the Cut-and-paste dataset, UnRel, and on a COCO-Stuff test split.
|
| 156 |
+
|
| 157 |
+
Behavioral Experiments: We evaluated human recognition on OCD and the Cut-and-paste dataset, as schematically illustrated in Fig. 3d, on Amazon Mechanical Turk (MTurk) [35]. We recruited 400 subjects per experiment, yielding $\approx 67,000$ trials. To avoid biases and potential memory effects, we took several precautions: (a) Only one target object from each class was selected; (b) Each subject saw each room only once; (c) The trial order was randomized.
|
| 158 |
+
|
| 159 |
+
Computer vision and most psychophysics experiments enforce N-way categorization (e.g. [31]). Here we used a more unbiased probing mechanism whereby subjects could use any word to describe the target object. We independently collected ground truth answers for each
|
| 160 |
+
|
| 161 |
+

|
| 162 |
+
(a) OCD
|
| 163 |
+
|
| 164 |
+

|
| 165 |
+
(b) Cut-and-paste
|
| 166 |
+
|
| 167 |
+

|
| 168 |
+
(c) UnRel
|
| 169 |
+
Figure 3: Datasets and psychophysics experiment scheme. (a-c) Example images for each dataset. The red box indicates the target location. In (a), two contextual modifications (gravity and size) are shown. In (b), the same target object is cut and pasted into either incongruent or congruent conditions. (c) consists of natural images. (d) Subjects were presented with a fixation cross (500 ms), followed by a bounding box indicating the target object location (1000 ms). The image was shown for 200 ms. After image offset, subjects typed one word to identify the target object.
|
| 170 |
+
|
| 171 |
+

|
| 172 |
+
(d) Schematic of human psychophysics experiment
|
| 173 |
+
|
| 174 |
+
object in a separate MTurk experiment with infinite viewing time and normal context conditions. These MTurk subjects did not participate in the main experiments. Answers in the main experiments were then deemed correct if they matched any of the ground truth responses [39].
|
| 175 |
+
|
| 176 |
+
A completely fair machine-human comparison is close to impossible since humans have decades of visual experience with the world. Despite this caveat, we find it instructive to show results for humans and models on the same images. We tried to mitigate the differences in training by focusing on the qualitative impact of contextual cues in perturbed conditions compared to the normal context condition. We also show human-model correlations to describe their relative trends across all conditions.
|
| 177 |
+
|
| 178 |
+
# 5. Results
|
| 179 |
+
|
| 180 |
+
# 5.1. Recognition in our OCD Dataset
|
| 181 |
+
|
| 182 |
+
Figure 4 (left) reports recognition accuracy for humans over the 6 context conditions (Sec. 4.2.1, Fig. 1) and 2 target object sizes (total of 12 conditions). Comparing the no-context condition (white) versus normal context (black), it is evident that contextual cues lead to improvement in recognition, especially for smaller objects, consistent with previous work [39].
|
| 183 |
+
|
| 184 |
+
Gravity violations led to a reduction in accuracy. For small objects, the gravity condition was even slightly worse than the no context condition; the unusual context can be misleading for humans. The effects were similar for the changes in object co-occurrences and relative object size. Objects were enlarged by a factor of 2, 3, or 4 in the relative size condition. Since the target object gets larger, and because of the improvement in recognition with object size, we would expect a higher accuracy in the size condition compared to normal context. However, increasing the size
|
| 185 |
+
|
| 186 |
+
of the target object while keeping all other objects intact, violates the basic statistics of expected relative sizes (e.g., we expect a chair to be larger than an apple). Thus, the drop in performance in the size condition is particularly remarkable and shows that violation of contextual cues can override basic object recognition.
|
| 187 |
+
|
| 188 |
+
Combining changes in gravity and in the statistics of object co-occurrences led to a pronounced drop in accuracy. Especially for the small target objects, violation of gravity and statistical co-occurrences led to performance well below that in the no context condition.
|
| 189 |
+
|
| 190 |
+
These results show that context can play a facilitatory role (compare normal versus no context), but context can also impair performance (compare gravity+co-occurrence versus no context). In other words, unorthodox contextual information hurts recognition.
|
| 191 |
+
|
| 192 |
+
Figure 4 (right) reports accuracies for CRTNet. Adding normal contextual information (normal context vs no context) led to an improvement of $4\%$ in performance for both small and large target objects. Remarkably, the CRTNet model captured qualitatively similar effects of contextual violations as those observed in humans. Even though the model performance was below humans in absolute terms (particularly for small objects), the basic trends associated with the role of contextual cues in humans can also be appreciated in the CRTNet results. Gravity, object co-occurrences, and relative object size changes led to a decrease in performance. As in the behavioral measurements, these effects were more pronounced for the small objects. For CRTNet, all conditions led to worse performance than the no context condition for small objects.
|
| 193 |
+
|
| 194 |
+
![](images/db0784eff3e01d4dfba84ac4c40d14fd93d5ab69b06e97a57cecae9aa4d67018.jpg)

Figure 4: The CRTNet model exhibits human-like recognition patterns across contextual variations in our OCD dataset. Different colors denote contextual conditions (Sec. 4.2.1, Fig. 1). We divided the trials into two groups based on target object sizes in degrees of visual angle (dva). Error bars denote the standard error of the mean (SEM).

<table><tr><td>OCD</td><td>Overall</td></tr><tr><td>CRTNet (ours)</td><td>0.89</td></tr><tr><td>Baselines</td><td></td></tr><tr><td>CATNet [39]</td><td>0.36</td></tr><tr><td>Faster R-CNN [28]</td><td>0.73</td></tr><tr><td>DenseNet [18]</td><td>0.66</td></tr><tr><td>Ablations</td><td></td></tr><tr><td>Ablated-SharedEncoder</td><td>0.84</td></tr><tr><td>Ablated-TargetOnly</td><td>0.89</td></tr><tr><td>Ablated-Unweighted</td><td>0.83</td></tr><tr><td>Ablated-NoDetachment</td><td>0.88</td></tr></table>

Table 1: Linear correlations between human and model performance over 12 contextual conditions.
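For reference, the human-model consistency numbers of Table 1 are linear (Pearson) correlations between per-condition accuracy vectors, which can be computed as in the illustrative sketch below; the accuracy values are placeholders, not the paper's data.

```python
# Illustrative sketch of the consistency metric in Table 1: Pearson correlation
# between per-condition accuracies over the 12 OCD conditions.
# The vectors below are made-up placeholders, not the paper's numbers.
import numpy as np

human_acc = np.array([0.82, 0.55, 0.60, 0.48, 0.40, 0.35,
                      0.90, 0.75, 0.78, 0.70, 0.62, 0.58])
model_acc = np.array([0.70, 0.50, 0.52, 0.45, 0.38, 0.33,
                      0.85, 0.72, 0.74, 0.66, 0.60, 0.55])

r = np.corrcoef(human_acc, model_acc)[0, 1]  # linear correlation, cf. Table 1
print(f"human-model correlation: {r:.2f}")
```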
# 5.2. Recognition in the Cut-and-paste Dataset

Synthetic images offer the possibility to systematically control every aspect of the scene, but such artificial images do not follow all the statistics of the natural world. Therefore, we further evaluated CRTNet and human performance in the naturalistic settings of the Cut-and-paste dataset [39] (see Table 2). The CRTNet model yielded results that were consistent with, and in many conditions better than, human performance. As observed in the human data, performance increases with object size. In addition, the effect of context was more pronounced for smaller objects (compare the normal context (NC) versus minimal context (MC) conditions).

In accordance with previous work [39], compared to the minimal context condition, congruent contextual information (CG) typically enhanced recognition whereas incongruent context (IG) impaired performance. Although a congruent context typically shares similar correlations between objects and scene properties, pasting the object into a congruent context led to weaker enhancement than the normal context. This lower contextual facilitation may be due to erroneous relative sizes between objects, unnatural boundaries created by pasting, or contextual cues specific to each image. CRTNet was relatively oblivious to these effects and its performance in the congruent condition was closer to that in the normal context condition, whereas these differences were more striking for humans. In stark contrast, incongruent context consistently degraded recognition performance below the minimal context condition for both CRTNet and humans.

# 5.3. Recognition in the UnRel Dataset

The Cut-and-paste dataset introduces artifacts (such as unnatural boundaries and erroneous relative sizes) due to the cut-and-paste process. Therefore, we also evaluated CRTNet on the UnRel dataset [26]. We use the performance on the COCO-Stuff [4] test split as a reference for normal context in natural images. CRTNet showed a slightly lower recognition accuracy in the out-of-context setting (Fig. 5).

# 5.4. Comparison with Baseline Models

Performance Evaluation: Although Faster R-CNN and CATNet leverage global contextual information, CRTNet outperformed both models, especially on small objects (OCD: Fig. 4 and Supp. Fig. S7-S8; Cut-and-paste: Table 2; UnRel: Fig. 5). Furthermore, Table 1 shows that CRTNet's performance pattern across the different OCD conditions is much more similar to the human performance pattern (in terms of correlations) than that of the other baseline models.

![](images/2b232eff679ba4eba0d7c0d6b3a156ad3c31ebb1bbcde84879acc5a833d38a0b.jpg)

Figure 5: CRTNet surpasses all baselines in both normal (COCO-Stuff [4]) and out-of-context (UnRel [26]) conditions.

Architectural Differences: While all baseline models can rely on an intrinsic notion of spatial relations, CRTNet learns about spatial relations between target and context tokens through a positional embedding. A visualization of the learned positional embeddings (Supp. Fig. S1) shows that CRTNet learns image topology by encoding distances within the image in the similarity of positional embeddings.
In CATNet, the attention map iteratively modulates the extracted feature maps from the context image at each time step in a recurrent neural network, whereas CRTNet uses a stack of feedforward transformer decoder layers with multi-head encoder-decoder attention. These decoder layers hierarchically integrate information via attention maps, modulating the target token features with context.

<table><tr><td></td><td colspan="4">Size [0.5, 1] dva</td><td colspan="4">Size [1.75, 2.25] dva</td><td colspan="4">Size [3.5, 4.5] dva</td><td colspan="4">Size [7, 9] dva</td></tr><tr><td></td><td>NC</td><td>CG</td><td>IG</td><td>MC</td><td>NC</td><td>CG</td><td>IG</td><td>MC</td><td>NC</td><td>CG</td><td>IG</td><td>MC</td><td>NC</td><td>CG</td><td>IG</td><td>MC</td></tr><tr><td rowspan="2">Humans [39]</td><td>56.0</td><td>18.8</td><td>5.9</td><td>10.1</td><td>66.8</td><td>48.6</td><td>22.3</td><td>38.9</td><td>78.9</td><td>66.0</td><td>38.8</td><td>62.0</td><td>88.7</td><td>70.7</td><td>59.0</td><td>77.4</td></tr><tr><td>(2.8)</td><td>(2.3)</td><td>(1.3)</td><td>(1.7)</td><td>(2.7)</td><td>(2.8)</td><td>(2.4)</td><td>(2.8)</td><td>(2.4)</td><td>(2.7)</td><td>(2.6)</td><td>(2.8)</td><td>(1.7)</td><td>(2.6)</td><td>(2.8)</td><td>(2.3)</td></tr><tr><td rowspan="2">CRTNet (ours)</td><td>50.2</td><td>43.9</td><td>10.6</td><td>17.4</td><td>78.4</td><td>81.4</td><td>41.2</td><td>56.7</td><td>91.5</td><td>87.3</td><td>51.1</td><td>76.6</td><td>92.9</td><td>87.7</td><td>66.4</td><td>83.0</td></tr><tr><td>(2.8)</td><td>(2.8)</td><td>(1.7)</td><td>(2.1)</td><td>(3.0)</td><td>(2.8)</td><td>(3.5)</td><td>(3.6)</td><td>(1.1)</td><td>(1.3)</td><td>(1.9)</td><td>(1.6)</td><td>(0.9)</td><td>(1.2)</td><td>(1.7)</td><td>(1.4)</td></tr><tr><td rowspan="2">CATNet [39]</td><td>37.5</td><td>29.2</td><td>3.6</td><td>6.1</td><td>53.0</td><td>46.5</td><td>10.9</td><td>22.1</td><td>72.8</td><td>71.2</td><td>24.5</td><td>38.9</td><td>81.8</td><td>78.9</td><td>47.6</td><td>74.8</td></tr><tr><td>(4.0)</td><td>(2.4)</td><td>(1.0)</td><td>(2.0)</td><td>(4.1)</td><td>(2.5)</td><td>(1.6)</td><td>(3.6)</td><td>(3.6)</td><td>(2.4)</td><td>(2.2)</td><td>(3.9)</td><td>(3.0)</td><td>(2.1)</td><td>(2.6)</td><td>(3.5)</td></tr><tr><td rowspan="2">Faster R-CNN [28]</td><td>24.9</td><td>10.9</td><td>5.9</td><td>7.2</td><td>44.3</td><td>27.3</td><td>20.1</td><td>16.5</td><td>65.1</td><td>53.2</td><td>39.0</td><td>42.9</td><td>71.5</td><td>64.3</td><td>55.0</td><td>64.6</td></tr><tr><td>(2.4)</td><td>(1.7)</td><td>(1.3)</td><td>(1.4)</td><td>(3.6)</td><td>(3.2)</td><td>(2.9)</td><td>(2.7)</td><td>(1.8)</td><td>(1.9)</td><td>(1.9)</td><td>(1.9)</td><td>(1.6)</td><td>(1.7)</td><td>(1.8)</td><td>(1.7)</td></tr><tr><td rowspan="2">DenseNet [18]</td><td>13.1</td><td>10.0</td><td>11.2</td><td>12.5</td><td>45.4</td><td>42.3</td><td>39.7</td><td>46.4</td><td>67.1</td><td>62.3</td><td>55.4</td><td>67.1</td><td>74.9</td><td>67.2</td><td>63.5</td><td>74.9</td></tr><tr><td>(1.9)</td><td>(1.7)</td><td>(1.8)</td><td>(1.8)</td><td>(3.6)</td><td>(3.5)</td><td>(3.5)</td><td>(3.6)</td><td>(1.8)</td><td>(1.9)</td><td>(1.9)</td><td>(1.8)</td><td>(1.6)</td><td>(1.7)</td><td>(1.7)</td><td>(1.6)</td></tr></table>

Table 2: Recognition accuracy of humans, the CRTNet model, and three different baselines on the Cut-and-paste dataset [39]. There are 4 conditions for each size: normal context (NC), congruent context (CG), incongruent context (IG) and minimal context (MC) (Sec. 4.2.2). Bold highlights the best performance. Numbers in brackets denote the standard error of the mean.

DenseNet takes cropped targets as input with only a few surrounding pixels of context. Its performance decreases dramatically for smaller objects, which also results in a lower correlation with the human performance patterns. For example, in the Cut-and-paste dataset, CRTNet outperforms DenseNet by $30\%$ for normal context and small objects (Table 2), and in OCD, DenseNet achieves a correlation of 0.66 vs. 0.89 for CRTNet (Table 1).

# 5.5. Ablation Reveals Critical Model Components

We assessed the importance of design choices by training and testing ablated versions of CRTNet on the OCD dataset.

Shared Encoder: In the CRTNet model, we trained two separate encoders to extract features from target objects and from the context, respectively. Here, we enforced weight-sharing between these two encoders (Ablated-SharedEncoder) to assess whether the same features for both streams are sufficient to reason about context. The results (Table 1, Supp. Fig. S3) show that the ablated version achieved a lower recognition accuracy and a lower correlation with the psychophysics results.

Recognition Based on Target or Context Alone: In the original CRTNet model, we use the confidence-weighted prediction $y_{p}$. Here, we tested two alternatives: CRTNet relying only on the target object ($y_{t}$, Ablated-TargetOnly) and CRTNet relying only on contextual reasoning ($y_{t,c}$, Ablated-Unweighted). The original model benefits from proper contextual information compared to the target-only version, but it is slightly more vulnerable to some of the context perturbations, as would be expected. It consistently outperforms the context-only version, demonstrating the usefulness of the confidence-modulation mechanism.
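As a rough illustration of the confidence-modulation mechanism contrasted in this ablation, the sketch below shows one plausible way a confidence-weighted prediction $y_p$ could combine $y_t$ and $y_{t,c}$; the paper defines the exact weighting in Sec. 3, so the formula and variable names here are assumptions rather than the authors' definition.

```python
# Hedged sketch of a confidence-weighted prediction y_p (illustrative only).
# ASSUMPTION: the confidence c is the peak softmax probability of the
# target-only prediction y_t, and y_p interpolates between y_t and the
# context-modulated prediction y_tc. The paper's exact formula may differ.
import torch
import torch.nn.functional as F

def confidence_weighted_prediction(logits_t: torch.Tensor,
                                   logits_tc: torch.Tensor) -> torch.Tensor:
    y_t = F.softmax(logits_t, dim=-1)    # target-only prediction (Ablated-TargetOnly uses this alone)
    y_tc = F.softmax(logits_tc, dim=-1)  # context-reasoned prediction (Ablated-Unweighted uses this alone)
    c = y_t.max(dim=-1, keepdim=True).values  # assumed confidence of the target stream
    return c * y_t + (1.0 - c) * y_tc    # y_p: lean on context when the target alone is uncertain
```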
Joint Training of the Target Encoder: In Sec. 3.6, we use gradient detachments to make the training of the target encoder $E_{t}(\cdot)$ independent of $G_{t}(\cdot)$, such that the latter cannot force the target encoder to learn more discriminative features. Here we remove this constraint (Ablated-NoDetachment, Supp. Fig. S6). The results are inferior to those of our original CRTNet, supporting the use of the gradient detachment method.
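The following PyTorch-style sketch illustrates the kind of gradient detachment being ablated here; the module names and the exact placement of the detach call are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of gradient detachment between the target encoder and the
# contextual reasoning branch (illustrative; not the authors' code).
# Ablated-NoDetachment would simply omit the .detach() call.
import torch.nn as nn

class TargetWithContext(nn.Module):
    def __init__(self, target_encoder: nn.Module, context_reasoner: nn.Module,
                 target_classifier: nn.Module):
        super().__init__()
        self.E_t = target_encoder    # target encoder E_t(.)
        self.G_t = context_reasoner  # contextual reasoning branch (role assumed)
        self.C_t = target_classifier # target-only classifier producing y_t

    def forward(self, target_img, context_tokens):
        f_t = self.E_t(target_img)
        y_t = self.C_t(f_t)                            # gradients from y_t train E_t
        y_tc = self.G_t(f_t.detach(), context_tokens)  # detached: G_t cannot push E_t's features
        return y_t, y_tc
```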
# 6. Conclusion

We introduced the OCD dataset and used it to systematically and quantitatively study the role of context in object recognition. OCD allowed us to rigorously scrutinize the multi-faceted aspects of how contextual cues influence visual recognition. We conducted experiments with computational models and complemented them with psychophysics studies to gauge human performance. Since the synthetic images in OCD can still be easily distinguished from real photographs, we addressed potential concerns due to the domain gap with experiments on two additional datasets consisting of real-world images.

We showed consistent results for humans and computational models over all three datasets. The results demonstrate that contextual cues can enhance visual recognition, but also that the "wrong" context can impair visual recognition capabilities both for humans and models.

We proposed the CRTNet model as a powerful and robust method to make use of contextual information in computer vision. CRTNet performs well compared to competitive baselines across a wide range of context conditions and datasets. In addition to its performance in terms of recognition accuracy, CRTNet's performance pattern was also found to resemble human behavior more than that of any baseline model.

Acknowledgements This work was supported by NIH R01EY026025 and the Center for Brains, Minds and Machines, funded by NSF STC award CCF-1231216. MZ is supported by a postdoctoral fellowship of the Agency for Science, Technology and Research. We thank Leonard Tang, Jeremy Schwartz, Seth Alter, Xavier Puig, Hanspeter Pfister, Jen Jen Chung, and Cesar Cadena for useful discussions and support.

# References
[1] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems, pages 4502-4510, 2016. 2

[2] Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In Proceedings of the European Conference on Computer Vision (ECCV), pages 456-473, 2018. 2

[3] Ali Borji, Saeed Izadi, and Laurent Itti. iLab-20M: A large-scale controlled object dataset to investigate deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2221-2230, 2016. 2

[4] Holger Caesar, Jasper Uijlings, and Vittorio Ferrari. COCO-Stuff: Thing and stuff classes in context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1209-1218, 2018. 5, 7

[5] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European Conference on Computer Vision, pages 213-229. Springer, 2020. 3

[6] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834-848, 2018. 2

[7] Myung Jin Choi, Antonio Torralba, and Alan S Willsky. Context models and out-of-context objects. Pattern Recognition Letters, 33(7):853-862, 2012. 2

[8] Wongun Choi and Silvio Savarese. A unified framework for multi-target tracking and collective activity recognition. In European Conference on Computer Vision, pages 215-230. Springer, 2012. 2

[9] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. IEEE, 2009. 2, 3

[10] Zhiwei Deng, Arash Vahdat, Hexiang Hu, and Greg Mori. Structure inference machines: Recurrent neural networks for analyzing relations in group activity recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4772-4781, 2016. 2

[11] Santosh K Divvala, Derek Hoiem, James H Hays, Alexei A Efros, and Martial Hebert. An empirical study of context in object detection. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1271-1278. IEEE, 2009. 2

[12] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. 3

[13] Nikita Dvornik, Julien Mairal, and Cordelia Schmid. Modeling visual context is key to augmenting object detection datasets. In Proceedings of the European Conference on Computer Vision (ECCV), pages 364-380, 2018. 2

[14] Golnaz Ghiasi, Yin Cui, Aravind Srinivas, Rui Qian, Tsung-Yi Lin, Ekin D. Cubuk, Quoc V. Le, and Barret Zoph. Simple copy-paste is a strong data augmentation method for instance segmentation, 2020. 2

[15] Josep M Gonfaus, Xavier Boix, Joost Van de Weijer, Andrew D Bagdanov, Joan Serrat, and Jordi Gonzalez. Harmony potentials for joint classification and segmentation. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 3280-3287. IEEE, 2010. 2

[16] Shirsendu Sukanta Halder, Jean-François Lalonde, and Raoul de Charette. Physics-based rendering for improving robustness to rain. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10203-10212, 2019. 2

[17] Hexiang Hu, Guang-Tong Zhou, Zhiwei Deng, Zicheng Liao, and Greg Mori. Learning structured inference neural networks with label relations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2960-2968, 2016. 2

[18] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700-4708, 2017. 3, 5, 7, 8

[19] Arthur Juliani, Vincent-Pierre Berges, Ervin Teng, Andrew Cohen, Jonathan Harper, Chris Elion, Chris Goy, Yuan Gao, Hunter Henry, Marwan Mattar, et al. Unity: A general platform for intelligent agents. arXiv preprint arXiv:1809.02627, 2018. 1, 2

[20] Lubor Ladicky, Chris Russell, Pushmeet Kohli, and Philip HS Torr. Graph cut based inference with co-occurrence statistics. In European Conference on Computer Vision, pages 239-253. Springer, 2010. 2

[21] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer, 2014. 2

[22] Stephen C Mack and Miguel P Eckstein. Object co-occurrence serves as a contextual cue to guide and facilitate visual search in a natural viewing environment. Journal of Vision, 11(9):9-9, 2011. 2

[23] Spandan Madan, Timothy Henry, Jamell Dozier, Helen Ho, Nishchal Bhandari, Tomotake Sasaki, Frédo Durand, Hanspeter Pfister, and Xavier Boix. On the capability of neural networks to generalize to unseen category-pose combinations. arXiv preprint arXiv:2007.08032, 2020. 2

[24] Jiteng Mu, Weichao Qiu, Gregory D Hager, and Alan L Yuille. Learning from synthetic animals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12386-12395, 2020. 2

[25] Aude Oliva and Antonio Torralba. The role of context in object recognition. Trends in Cognitive Sciences, 11(12):520-527, 2007. 2

[26] Julia Peyre, Ivan Laptev, Cordelia Schmid, and Josef Sivic. Weakly-supervised learning of visual relations. In ICCV, 2017. 2, 5, 7

[27] Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, and Antonio Torralba. VirtualHome: Simulating household activities via programs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8494-8502, 2018. 1, 2, 5

[28] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. arXiv preprint arXiv:1506.01497, 2015. 5, 7, 8

[29] Amir Rosenfeld, Richard Zemel, and John K Tsotsos. The elephant in the room. arXiv preprint arXiv:1808.03305, 2018. 2

[30] Jin Sun and David W Jacobs. Seeing what is not there: Learning context to determine where objects are missing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5716-5724, 2017. 2

[31] Hanlin Tang, Martin Schrimpf, William Lotter, Charlotte Moerman, Ana Paredes, Josue Ortega Caro, Walter Hardesty, David Cox, and Gabriel Kreiman. Recurrent computations for visual pattern completion. Proceedings of the National Academy of Sciences, 115(35):8835-8840, 2018. 5

[32] Antonio Torralba. Contextual priming for object detection. International Journal of Computer Vision, 53(2):169-191, 2003. 2

[33] Antonio Torralba, Kevin P Murphy, and William T Freeman. Contextual models for object detection using boosted random fields. In Advances in Neural Information Processing Systems, pages 1401-1408, 2005. 2

[34] Antonio Torralba, Kevin P Murphy, William T Freeman, and Mark A Rubin. Context-based vision system for place and object recognition. In Computer Vision, IEEE International Conference on, volume 2, pages 273-273. IEEE Computer Society, 2003. 2

[35] Amazon Mechanical Turk. Amazon Mechanical Turk. Retrieved August, 17:2012, 2012. 5

[36] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017. 3

[37] Kevin Wu, Eric Wu, and Gabriel Kreiman. Learning scene gist with convolutional neural networks to improve object recognition. In Information Sciences and Systems (CISS), 2018 52nd Annual Conference on, pages 1-6. IEEE, 2018. 2

[38] Jian Yao, Sanja Fidler, and Raquel Urtasun. Describing the scene as a whole: Joint object detection, scene classification and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 702-709. IEEE, 2012. 2

[39] Mengmi Zhang, Claire Tseng, and Gabriel Kreiman. Putting visual object recognition in context. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12985-12994, 2020. 2, 4, 5, 6, 7, 8
whenpigsflycontextualreasoninginsyntheticandnaturalscenes/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:670a256363c84644320e83ae6528e9c44a5e2ec391e11b688cffb4ccbdf43e8b
size 421179

whenpigsflycontextualreasoninginsyntheticandnaturalscenes/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:35f3a73e0e386da1a64f1c216f564d1f5b5bbc7735c73ae0823ede482fd2c82e
size 380895

where2actfrompixelstoactionsforarticulated3dobjects/71ab66ad-201d-4d83-adce-eab14dcf3f2b_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:93c1a2052d1e000aad3dc04bbc07697637d8ff62e3a2893720f6e668d4862f68
size 83010

where2actfrompixelstoactionsforarticulated3dobjects/71ab66ad-201d-4d83-adce-eab14dcf3f2b_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dd5a05573757a8f9728e161d2001bf9445e623e80ea99022679dc5897c3c1a3f
size 103725

where2actfrompixelstoactionsforarticulated3dobjects/71ab66ad-201d-4d83-adce-eab14dcf3f2b_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8b3848ade7279f8a409f23e77993fb19bf1da3139068b128cd7647454769d7ac
size 2508352

where2actfrompixelstoactionsforarticulated3dobjects/full.md
ADDED
@@ -0,0 +1,329 @@
# Where2Act: From Pixels to Actions for Articulated 3D Objects

Kaichun Mo<sup>\*1</sup> Leonidas Guibas<sup>1</sup> Mustafa Mukadam<sup>2</sup> Abhinav Gupta<sup>2</sup> Shubham Tulsiani<sup>2</sup>
<sup>1</sup>Stanford University <sup>2</sup>Facebook AI Research

https://cs.stanford.edu/~kaichun/where2act

# Abstract

One of the fundamental goals of visual perception is to allow agents to meaningfully interact with their environment. In this paper, we take a step towards that long-term goal – we extract highly localized actionable information related to elementary actions such as pushing or pulling for articulated objects with movable parts. For example, given a drawer, our network predicts that applying a pulling force on the handle opens the drawer. We propose, discuss, and evaluate novel network architectures that, given image and depth data, predict the set of actions possible at each pixel, and the regions over articulated parts that are likely to move under the force. We propose a learning-from-interaction framework with an online data sampling strategy that allows us to train the network in simulation (SAPIEN) and generalizes across categories. Check the website for code and data release.
# 1. Introduction

We humans interact with a plethora of objects around us in our daily lives. What makes this possible is our effortless understanding of what can be done with each object, where this interaction may occur, and precisely how we must move to accomplish it – we can pull on a handle to open a drawer, push anywhere on a door to close it, flip a switch to turn a light on, or push a button to start the microwave. Not only do we understand what actions will be successful, we also intuitively know which ones will not, e.g. pulling out a remote's button is probably not a good idea! In this work, our goal is to build a perception system which also has a similar understanding of general objects, i.e. given a novel object, we want a system that can infer the myriad possible interactions<sup>1</sup> that one can perform with it.

The task of predicting possible interactions with objects is one of central importance in both the robotics and the computer vision communities. In robotics, the ability to predict feasible and desirable actions (e.g. a drawer can be pulled out) can help in motion planning, efficient exploration and interactive learning (sampling successful trials faster). On the other hand, the computer vision community has largely focused on inferring semantic labels (e.g. part segmentation, keypoint estimation) from visual input, but such passively learned representations provide limited understanding. More specifically, passive learning falls short on the ability of agents to perform actions, learn prediction models (forward dynamics) or even semantics in many cases (categories are more often than not defined on affordances themselves!). Our paper takes a step forward in building a common perception system across diverse objects, while creating its own supervision about what actions may be successful by actively interacting with the objects.

![](images/0bec31bbe97ae4c2a68ddb2a02226807fb076b563b22c386ea8d27cd1ca1b270.jpg)

Figure 1. The Proposed Where2Act Task. Given as input an articulated 3D object, we learn to propose the actionable information for different robotic manipulation primitives (e.g. pushing, pulling): (a) the predicted actionability scores over pixels; (b) the proposed interaction trajectories, along with (c) their success likelihoods, for a selected pixel highlighted in red. We show two high-rated proposals (left) and two with lower scores (right) due to interaction orientations and potential robot-object collisions.

The first question we must tackle is how one can parametrize the predicted action space. We note that any long-term interaction with an object can be considered as a sequence of short-term 'atomic' interactions like pushing and pulling. We therefore limit our work to considering the plausible short-term interactions that an agent can perform given the current state of the object. Each such atomic interaction can further be decomposed into where and how, e.g. where on the cabinet should the robot pull (e.g. drawer handle or drawer surface) and how should the motion be executed (e.g. pull parallel or perpendicular to the handle). This observation allows us to formulate our task as one of dense visual prediction. Given a depth or color image of an object, we learn to infer for each pixel/point whether a certain primitive action can be performed at that location, and if so, how it should be executed.

Concretely, as we illustrate in Figure 1 (a), we learn a prediction network that, given an atomic action type, can predict for each pixel: a) an 'actionability' score, b) action proposals, and c) success likelihoods. Our approach allows an agent to learn these by simply interacting with various objects and recording the outcomes of its actions – labeling ones that cause a desirable state change as successful. While randomly interacting can eventually allow an agent to learn, we observe that it is not a very efficient exploration strategy. We therefore propose an on-policy data sampling strategy to alleviate this issue – by biasing the sampling towards actions the agent thinks are likely to succeed.

We use the SAPIEN [45] simulator for learning and testing our approach for six types of primitive interactions, covering 972 shapes over 15 commonly seen indoor object categories. We empirically show that our method successfully learns to predict possible actions for novel objects, and does so even for previously unseen categories.

In summary, our contributions are:

- we formulate the task of inferring affordances for manipulating 3D articulated objects by predicting per-pixel action likelihoods and proposals;
- we propose an approach that can learn from interactions while using adaptive sampling to obtain more informative samples;
- we create benchmarking environments in SAPIEN, and show that our network learns actionable visual representations that generalize to novel shapes and even unseen object categories.
# 2. Related Works

Predicting Semantic Representations. To successfully interact with a 3D object, an agent must be able to 'understand' it given some perceptual input. Several previous works in the computer vision community have pursued such an understanding in the form of myriad semantic labels. For example, predicting category labels [44, 3], or more fine-grained output such as semantic keypoints [6, 52] or part segmentations [51, 24], can arguably yield more actionable representations, e.g. allowing one to infer where 'handles', 'buttons', etc. are. However, merely obtaining such semantic labels is clearly not sufficient on its own - an agent must also understand what needs to be done (e.g. a handle can be 'pulled' to open a door), and how that action should be accomplished, i.e. what precise movements are required to 'pull open' the specific object considered.

Inferring Geometric and Physical Properties. Towards obtaining information more directly useful for how to act, some methods aim for representations that can be leveraged by classical robotics techniques. In particular, given geometric representations such as the shape [3, 22, 45, 47], along with the rigid object pose [46, 40, 39, 41, 4], articulated part pose [12, 50, 42, 49, 18, 45, 15], or shape functional semantics [17, 13, 14], one can leverage off-the-shelf planners [23] or prediction systems [21] developed in the robotics community to obtain action trajectories. Additionally, the ability to infer physical properties, e.g. material [36, 19] or mass [36, 37], can further make this process accurate. However, this two-stage procedure for acting, involving a perception system that predicts the object 'state', is not robust to prediction errors and makes the perception system produce richer output than possibly needed, e.g. we don't need the full object state to pull out a drawer. Moreover, while this approach allows an agent to precisely execute an action, it sidesteps the issue of what action needs to/can be performed in the first place, e.g. how does the agent understand that a button can be pushed?

Learning Affordances from Passive Observations. One interesting approach to allow agents to learn what actions can be taken in a given context is to leverage (passive) observations – one can watch videos of other agents interacting with an object/scene and learn what is possible to do. This technique has been successfully used to learn scene affordances (sitting/standing) [8], possible contact locations [2], interaction hotspots [26], or even grasp patterns [10]. However, learning from passive observations is challenging for several reasons, e.g. the learning agent may differ in anatomy, thereby requiring appropriate retargeting of demonstrations. An even more fundamental concern is the distribution shift common in imitation learning – while the agent may see examples of what can be done, it may not have seen sufficient negative examples or even sufficiently varied positive ones.

Learning Perception by Interaction. Most closely related to our approach is the line of work where an agent learns to predict affordances by generating its own training data – by interacting with the world and trying out possible actions. One important task where this approach has led to impressive results is that of planar grasping [30, 16], where the agent can learn which grasp actions would be successful. While subsequent approaches have attempted to apply these ideas to other tasks like object segmentation [29, 20], planar pushing [31, 53], or non-planar grasps [25], these systems are limited in the complexity of the actions they model. In parallel, while some methods have striven to learn more complex affordances, they do so without modeling the low-level actions required and instead frame the task as classification with oracle manipulators [27]. In our work, driven by the availability of scalable simulation with diverse objects, we tackle the task of predicting affordances for richer interactions while also learning the low-level actions that induce the desired change.
# 3. Problem Statement

We formulate a new challenging problem, Where2Act – inferring per-pixel 'actionable information' for manipulating 3D articulated objects. As illustrated in Fig. 1, given a 3D shape $S$ with articulated parts (e.g. the drawer and door on the cabinet), we perform per-pixel predictions for (a) where to interact, (b) how to interact, and (c) the interaction outcomes, under different action primitives.

In our framework, the input shape can be represented as a 2D RGB image or a 3D partial point cloud scan. We parametrize six types of short-term primitive actions (e.g. pushing, pulling) by the robot gripper pose in the $SE(3)$ space and consider an interaction successful if it validly interacts with the intended contact point on the object and causes a considerable amount of part motion.

With respect to every action primitive, we predict, for each pixel/point $p$ over the visible articulated parts of a 3D shape $S$, the following: (a) an actionability score $a_{p}$ measuring how likely the pixel $p$ is actionable; (b) a set of interaction proposals $\{R_{z|p} \in SO(3)\}_{z}$ to interact with the point $p$, where $z$ is randomly drawn from a uniform Gaussian distribution; (c) one success likelihood score $s_{R|p}$ for each action proposal $R$.
# 4. Method

We propose a learning-from-interaction approach to tackle this task. Taking as input a single RGB image or a partial 3D point cloud, we employ an encoder-decoder backbone to extract per-pixel features and design three decoding branches to predict the 'actionable information'.

# 4.1. Network Modules

Fig. 2 presents an overview of the proposed method. Our pipeline has four main components: a backbone feature extractor, an actionability scoring module, an action proposal module, and an action scoring module. We train an individual network for each primitive action.

Backbone Feature Extractor. We extract dense per-pixel features $\{f_p\}_p$ over the articulated parts. In real-world robotic manipulation, both RGB cameras and RGB-D scanners are used; therefore, we evaluate both settings. For the 2D case, we use the UNet architecture [35] and implementation [48] with a ResNet-18 [11] encoder, pretrained on ImageNet [5], and a symmetric decoder, trained from scratch, equipped with dense skip links between the encoder and decoder. For the 3D experiments, we use the PointNet++ segmentation network [32] and implementation [43] with 4 set abstraction layers with single-scale grouping for the encoder and 4 feature propagation layers for the decoder. In both cases, we finally produce a per-pixel feature $f_{p} \in \mathbb{R}^{128}$.

![](images/7f3e6cc1b1f5deebb9f0086093088b56a29ca0ef718f78fc0575a11a2bb8e222.jpg)

Figure 2. Network Architecture. Our network takes a 2D image or a 3D partial scan as input and extracts a per-pixel feature $f_{p}$ using (a) UNet [35] for 2D images and (b) PointNet++ [32] for 3D point clouds. To decode the per-pixel actionable information, we propose three decoding heads: (c) an actionability scoring module $D_{a}$ that predicts a score $a_{p} \in [0,1]$; (d) an action proposal module $D_{r}$ that proposes multiple gripper orientations $R_{z|p} \in SO(3)$ sampled from a uniform Gaussian random noise $z$; (e) an action scoring module $D_{s}$ that rates the confidence $s_{R|p} \in [0,1]$ for each proposal.

Actionability Scoring Module. For each pixel $p$, we predict an actionability score $a_{p} \in [0,1]$ indicating how likely the pixel is actionable. We employ a Multilayer Perceptron (MLP) $D_{a}$ with one hidden layer of size 128 to implement this module. The network outputs one scalar $a_{p}$ after applying the Sigmoid function, where a higher score indicates a higher chance for successful interaction. Namely,

$$
a_{p} = D_{a}\left(f_{p}\right) \tag{1}
$$
Action Proposal Module. For each pixel $p$, we employ an action proposal module, essentially formulated as a conditional generative model, to propose high-recall interaction parameters $\{R_{z|p}\}_{z}$. We employ another MLP $D_r$ with one hidden layer of size 128 to implement this module. Taking as input the current pixel feature $f_p$ and a randomly sampled Gaussian noise vector $z \in \mathbb{R}^{10}$, the network $D_r$ predicts a gripper end-effector 3-DoF orientation $R_{z|p}$ in the $SO(3)$ space

$$
R_{z \mid p} = D_{r}\left(f_{p}, z\right). \tag{2}
$$

We represent the 3-DoF gripper orientation by the first two orthonormal axes of the $3 \times 3$ rotation matrix, following the 6D-rotation representation proposed in [54].
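For concreteness, the sketch below shows the standard way a rotation matrix is recovered from this two-axis (6D) representation via Gram-Schmidt orthonormalization; conventions such as column ordering are assumptions rather than details taken from the paper.

```python
# Sketch of recovering a rotation matrix from the 6D representation of [54]
# (two 3-vectors, Gram-Schmidt orthonormalized). Illustrative; the column vs.
# row convention is an assumption.
import torch
import torch.nn.functional as F

def rotation_from_6d(d6: torch.Tensor) -> torch.Tensor:
    """d6: (..., 6) network output -> (..., 3, 3) rotation matrix in SO(3)."""
    a1, a2 = d6[..., :3], d6[..., 3:]
    b1 = F.normalize(a1, dim=-1)                                         # first axis
    b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)  # orthogonalized second axis
    b3 = torch.cross(b1, b2, dim=-1)                                     # third axis completes the frame
    return torch.stack((b1, b2, b3), dim=-1)                             # columns are the gripper axes
```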
![](images/4b8e9b7b9bae4ecab4463b3e3d1d8ceb2d3b0ab51a36bbad16e98757eec2c74a.jpg)

(a)

![](images/0354dae2b985c2269eaff9d9f0bff7e07b423c8bcbebd4ddba8ac533fd2e55eb.jpg)

push

![](images/d0fc1d83e24e0f6b283cba4db1d60b2081562bee271f7e2174e0f0b7fa25f494.jpg)

push-up

![](images/f0e4726d5f56e12023e6bbb42a3f98f37df75356dfaa72a60b9c66eb4e0ef9fe.jpg)

push-left

![](images/6f4b6b82b0254e6dddebcb34b2f22b3570c0442dc8e1bc17e81db8f072fab1e0.jpg)

pull

![](images/bb31c4a1fa09446d2052a04b3eb66a08fbf6a33b6d518dd774264e5a64cb9d4e.jpg)

pull-up

![](images/5e3a0353e37ee329ed05e0e6a5e15be0b67ec8a7259f7f72180b5407ce93f43e.jpg)

pull-left

(b)

Figure 3. (a) Our interactive simulation environment: we show the local gripper frame by the red, green and blue axes, which correspond to the leftward, upward and forward directions respectively; (b) Six types of action primitives parametrized in the $SE(3)$ space: we visualize each pre-programmed motion trajectory by showing the three key frames, where the time steps go from the transparent grippers to the solid ones, with $3\times$ exaggerated motion ranges.

Action Scoring Module. For an action proposal $R$ at pixel $p$, we finally estimate a likelihood $s_{R|p} \in [0,1]$ for the success of the interaction parametrized by the tuple $(p,R) \in SE(3)$. One can use the predicted action scores to filter out low-rated proposals, or sort all the candidates according to the predicted scores, analogous to predicting confidence scores for bounding box proposals in the object detection literature.

This network module $D_{s}$ is also parametrized by an MLP with one hidden layer of size 128. Given an input tuple $(f_p,R)$, we produce a scalar $s_{R|p} \in [0,1]$,

$$
s_{R | p} = D_{s}\left(f_{p}, R\right), \tag{3}
$$

where $s_{R|p} > 0.5$ indicates a positive action proposal $R$ at test time.
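A compact sketch of what the three decoding heads could look like is given below; the hidden size of 128 follows the text, while the activation choices and the use of the 6D orientation as the input to $D_s$ are assumptions for illustration.

```python
# Illustrative sketch of the three decoding heads (D_a, D_r, D_s) on top of the
# 128-d per-pixel feature f_p. Hidden sizes follow the text; activations and the
# 6D encoding of R fed to D_s are assumptions, not the authors' exact code.
import torch
import torch.nn as nn

FEAT_DIM, HIDDEN, NOISE_DIM = 128, 128, 10

class ActionabilityHead(nn.Module):          # D_a: f_p -> a_p in [0, 1]
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(FEAT_DIM, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, 1), nn.Sigmoid())
    def forward(self, f_p):
        return self.mlp(f_p).squeeze(-1)

class ProposalHead(nn.Module):               # D_r: (f_p, z) -> 6D orientation for R_{z|p}
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(FEAT_DIM + NOISE_DIM, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, 6))
    def forward(self, f_p, z):
        return self.mlp(torch.cat([f_p, z], dim=-1))

class ActionScoreHead(nn.Module):            # D_s: (f_p, R) -> s_{R|p} in [0, 1]
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(FEAT_DIM + 6, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, 1), nn.Sigmoid())
    def forward(self, f_p, r6):
        return self.mlp(torch.cat([f_p, r6], dim=-1)).squeeze(-1)
```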
# 4.2. Collecting Training Data

It is extremely difficult to collect human annotations for the predictions that we are pursuing. Instead, we propose to let the agent learn by interacting with objects in simulation. As illustrated in Fig. 3 (a), we create an interactive environment using SAPIEN [45] where a random 3D articulated object is selected and placed at the center of the scene. A flying robot gripper can then interact with the object by specifying a position $p \in \mathbb{R}^3$ over the shape geometry surface with an end-effector orientation $R \in SO(3)$. We consider six types of action primitives (Fig. 3 (b)) with pre-programmed interaction trajectories, each of which is parameterized by the gripper pose $(p,R) \in SE(3)$ at the beginning.

We employ a hybrid data sampling strategy where we first sample a large amount of offline random interaction trajectories to bootstrap the learning and then adaptively sample online interaction data points based on the network predictions for more efficient learning.

Offline Random Data Sampling. We sample most of the training data in an offline fashion, as we can efficiently sample many data points by parallelizing simulation environments across multiple CPUs. For each data point, we first randomly sample a position $p$ over the ground-truth articulated parts to interact with. Then, we randomly sample an interaction orientation $R \in SO(3)$ from the hemisphere above the tangent plane around $p$, oriented consistently with the positive normal direction, and query the outcome of the interaction parametrized by $(p,R)$. We mark orientations $R$ from the other hemisphere as negative without trials, since the gripper cannot be placed inside the object volume.

In our experiments, for each primitive action type, we sample enough offline data points to obtain roughly 10,000 positive trajectories to bootstrap the training. Though parallelization allows large-scale offline data collection, such a random data sampling strategy is highly inefficient at querying the interesting interaction regions to obtain positive data points. Statistics show that only $1\%$ of data samples are positive for the pulling primitive. This creates a significant data imbalance for training the network and also hints that the most likely pullable regions occupy only small areas of the surface, which is practically very reasonable since we most often pull out doors/drawers by their handles.

Online Adaptive Data Sampling. To address the sampling inefficiency of offline random data sampling, we propose to conduct online adaptive data sampling that samples more interactions over the subregions that the network currently predicts to be likely successful.

In our implementation, while training the network for the action scoring module $D_{s}$ with a data sample $(p,R)$, we infer the action score predictions $\{s_{R|p_i}\}_i$ over all pixels $\{p_i\}_i$ on the articulated parts. Then, we sample one position $p_*$ to conduct an additional interaction trial $(p_*,R)$ according to the SoftMax-normalized probability distribution over all possible interaction positions. By performing such online adaptive data sampling, we observe a steadily growing positive data sample rate, since the network actively chooses to sample more around the likely successful subregions. Also, we observe that sampling more data around the interesting regions helps the network learn better features for distinguishing the geometric subtleties around the small but crucial interactive parts, such as handles, buttons and knobs.
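The position-sampling step can be sketched as follows (illustrative, not the authors' code); the temperature parameter is an added knob not mentioned in the text.

```python
# Illustrative sketch of prediction-biased position sampling: pixels on the
# articulated parts are sampled in proportion to a softmax over the current
# action-score predictions. The temperature argument is an assumption.
import torch

def sample_interaction_position(scores: torch.Tensor, temperature: float = 1.0) -> int:
    """scores: (N,) predicted success scores s_{R|p_i} for all candidate pixels.
    Returns the index of one pixel p_*, drawn with probability softmax(scores)."""
    probs = torch.softmax(scores / temperature, dim=0)
    return int(torch.multinomial(probs, num_samples=1))
```

In the full procedure described next, half of the new trials come from this biased sampler and half from uniform random sampling.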
While this online data sampling is beneficial, it may lead to insufficient exploration of novel regions. Thus, in our final online data sampling procedure, we sample $50\%$ of the data trajectories from random data sampling and the other $50\%$ from prediction-biased adaptive data sampling.

# 4.3. Training and Losses

We empirically find it beneficial to first train the action scoring module $D_{s}$ and then train the three decoders jointly. We maintain separate data queues for feeding equal amounts of positive and negative interaction data in each training batch to address the data imbalance issue. We also balance sampling shapes from different object categories equally.

Action Scoring Loss. Given a batch of $B$ interaction data points $\{(S_i, p_i, R_i, r_i)\}_i$ where $r_i = 1$ (positive) and $r_i = 0$ (negative) denote the ground-truth interaction outcome, we train the action scoring module $D_s$ with the standard binary cross entropy loss

$$
\mathcal{L}_{s} = -\frac{1}{B} \sum_{i} r_{i} \log\left(D_{s}\left(f_{p_{i} \mid S_{i}}, R_{i}\right)\right) + \left(1 - r_{i}\right) \log\left(1 - D_{s}\left(f_{p_{i} \mid S_{i}}, R_{i}\right)\right). \tag{4}
$$

Action Proposal Loss. We leverage the Min-of-N strategy [7] to train the action proposal module $D_r$, which is essentially a conditional generative model that maps a pixel $p$ to a distribution of possible interaction proposals $R_{z|p}$. For each positive interaction data point, we train $D_r$ to propose at least one candidate that matches the ground-truth interaction orientation. Concretely, for a batch of $B$ interaction data points $\{(S_i, p_i, R_i, r_i)\}_i$ where $r_i = 1$, the Min-of-N loss is defined as

$$
\mathcal{L}_{r} = \frac{1}{B} \sum_{i} \min_{j=1,\dots,100} dist\left(D_{r}\left(f_{p_{i} | S_{i}}; z_{j}\right), R_{i}\right), \tag{5}
$$

where the $z_{j}$ are i.i.d. randomly sampled Gaussian vectors and $dist$ denotes a distance function between two 6D-rotation representations, as defined in [54].
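The Min-of-N objective of Eq. 5 can be sketched as below, reusing the `rotation_from_6d` helper and `ProposalHead` from the earlier sketches; the geodesic-angle distance used here is one plausible stand-in for the 6D-rotation distance of [54], not necessarily the paper's choice.

```python
# Illustrative sketch of the Min-of-N proposal loss (Eq. 5). For each positive
# sample we draw N=100 noise vectors, decode N proposals, and only the proposal
# closest to the ground-truth orientation receives gradient.
import torch

def geodesic_distance(R1: torch.Tensor, R2: torch.Tensor) -> torch.Tensor:
    """R1, R2: (N, 3, 3) rotation matrices -> (N,) angles in radians."""
    cos = ((R1.transpose(-1, -2) @ R2).diagonal(dim1=-2, dim2=-1).sum(-1) - 1.0) / 2.0
    return torch.acos(cos.clamp(-1.0, 1.0))

def min_of_n_loss(proposal_head, f_p: torch.Tensor, R_gt: torch.Tensor, n: int = 100) -> torch.Tensor:
    """f_p: (128,) pixel feature; R_gt: (3, 3) ground-truth orientation."""
    z = torch.randn(n, 10)                      # i.i.d. Gaussian noise vectors z_j
    d6 = proposal_head(f_p.expand(n, -1), z)    # (n, 6) proposals D_r(f_p; z_j)
    R_prop = rotation_from_6d(d6)               # reuses the 6D-to-matrix sketch above
    return geodesic_distance(R_prop, R_gt.expand(n, -1, -1)).min()
```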
Actionability Scoring Loss. We define the 'actionability' score of a pixel as the expected success rate when executing a random proposal generated by our proposal generation module $D_r$. While one could estimate this by actually executing these proposals, we note that our learned action scoring module $D_s$ allows us to evaluate it directly. We train our 'actionability' scoring module to learn this expected score across proposals from $D_r$, namely,

$$
\hat{a}_{p_{i} \mid S_{i}} = \frac{1}{100} \sum_{j=1,\dots,100} D_{s}\left(f_{p_{i} \mid S_{i}}, D_{r}\left(f_{p_{i} \mid S_{i}}, z_{j}\right)\right); \tag{6}
$$

$$
\mathcal{L}_{a} = \frac{1}{B} \sum_{i} \left(D_{a}\left(f_{p_{i} | S_{i}}\right) - \hat{a}_{p_{i} | S_{i}}\right)^{2}.
$$

This strategy is computationally efficient since we reuse the 100 proposals computed in Eq. 5. Also, since the action proposal network $D_r$ is optimized to cover all successful interaction orientations, the estimate $\hat{a}_{p_i|S_i}$ is expected to approach 1 when most of the proposals are successful and 0 when the position $p$ is not actionable (i.e., all proposals are rated with low success likelihood scores).

Final Loss. After adjusting the relative loss scales to the same level, we obtain the final objective function

$$
\mathcal{L} = \mathcal{L}_{s} + \mathcal{L}_{r} + 100 \times \mathcal{L}_{a}. \tag{7}
$$
|
| 178 |
+
|
| 179 |
+
# 5. Experiments
|
| 180 |
+
|
| 181 |
+
We set up an interactive simulation environment in SAPIEN [45] and benchmark performance of the proposed method both qualitatively and quantitatively. Results also show that the networks learn representations that can generalize to novel unseen object categories and real-world data.
|
| 182 |
+
|
| 183 |
+
# 5.1. Framework and Settings
|
| 184 |
+
|
| 185 |
+
We describe our simulation environment, simulation assets and action primitive settings in details below.
|
| 186 |
+
|
| 187 |
+
Environment. Equipped with a large-scale PartNet-Mobility dataset, SAPIEN [45] provides a physics-rich simulation environment that supports robot actuators interacting with 2,346 3D CAD models from 46 object categories. Every articulated 3D object is annotated with articulated parts of interests (e.g. doors, handles, buttons) and their part motion information (i.e. motion types, motion axes and motion ranges). SAPIEN integrates one of the state-of-the-art physical simulation engines NVIDIA PhysX [28] to simulate physics-rich interaction details.
|
| 188 |
+
|
| 189 |
+
We adapt SAPIEN to set up our interactive environment for our task. For each interaction simulation, we first randomly select one articulated 3D object, which is zero-centered and normalized within a unit-sphere, and place it in the scene. We initialize the starting pose for each articulated part, with a $50\%$ chance at its rest state (e.g. a fully closed drawer) and $50\%$ chance with a random pose (e.g. a half-opened drawer). Then, we use a Franka Panda Flying gripper with 2 fingers as the robot actuator, which has 8 degree-of-freedom (DoF) in total, including the 3 DoF position, 3 DoF orientation and 2 DoF for the 2 fingers. The flying gripper can be initialized at any position and orientation with a closed or open gripper. We observe the object in the scene from an RGB-D camera with known intrinsics that is mounted 5-unit far from the object, facing the object center, located at the upper hemisphere of the object with a random azimuth $[0^{\circ}, 360^{\circ})$ and a random altitude $[30^{\circ}, 60^{\circ}]$ . Fig. 3 (a) visualizes one example of our simulation environment.
|
| 190 |
+
|
| 191 |
+
Simulation Assets. We conduct our experiments using 15 selected object categories in the PartNet-Mobility dataset, after removing the objects that are either too small (e.g. pens, USB drives), requiring multi-gripper collaboration (e.g. pliers, scissors), or not making sense for robot to manipulate (e.g. keyboards, fans, clocks). We use 10 categories for training and reserve the rest 5 categories only for testing, in order to analyze if the learned representations can generalize to novel unseen categories. In total, there are 773 objects in the training categories and 199 objects in the testing ones. We further divide the training split into 586 training shapes and 187 testing shapes, and only use the training shapes from the training categories to train our networks. Table 1 summarizes the detailed statistics of the final data splits.
|
| 192 |
+
|
| 193 |
+
<table><tr><td>Train-Cats</td><td>All</td><td>Box</td><td>Door</td><td>Faucet</td><td>Kettle</td><td>Microwave</td></tr><tr><td>Train-Data</td><td>586</td><td>20</td><td>23</td><td>65</td><td>22</td><td>9</td></tr><tr><td>Test-Data</td><td>187</td><td>8</td><td>12</td><td>19</td><td>7</td><td>3</td></tr><tr><td></td><td colspan="2">Fridge</td><td>Cabinet</td><td>Switch</td><td>TrashCan</td><td>Window</td></tr><tr><td></td><td colspan="2">32</td><td>270</td><td>53</td><td>52</td><td>40</td></tr><tr><td></td><td colspan="2">11</td><td>75</td><td>17</td><td>17</td><td>18</td></tr><tr><td>Test-Cats</td><td>All</td><td>Bucket</td><td>Pot</td><td>Safe</td><td>Table</td><td>Washing</td></tr><tr><td>Test-Data</td><td>199</td><td>36</td><td>23</td><td>29</td><td>95</td><td>16</td></tr></table>
|
| 194 |
+
|
| 195 |
+
Table 1. We summarize the shape counts in our dataset. Here, pot and washing are short for kitchen pot and washing machine.
|
| 196 |
+
|
| 197 |
+
Action Settings. We consider six types of primitive actions: pushing, pushing-up, pushing-left, pulling, pulling-up, pulling-left. All action primitives are pre-programmed with hard-coded motion trajectories and parameterized by the gripper starting pose $R \in SE(3)$ in the camera space. At the beginning of each interaction simulation, we initialize the robot gripper slightly above a surface position $p$ of interest approaching from orientation $R$ .
|
| 198 |
+
|
| 199 |
+
We visualize the action primitives in Fig. 3 (b). For pushing, a closed gripper first touches the surface and then pushes 0.05 unit-length forward. For pushing-up and pushing-left, the closed gripper moves forward by 0.04 unit-length to contact the surface and scratches the surface to the up or left direction for 0.05 unit-length. For pulling, an open gripper approaches the surface by moving forward for 0.04 unit-length, performs grasping by closing the gripper, and pulls backward for 0.05 unit-length. For pulling-up and pulling-left, after the attempted grasping, the gripper moves along the up or left direction for 0.05 unit-length. Notice that the pulling actions may degrade to the pushing ones if the gripper grasps nothing but just touches/scratches the surface.
|
| 200 |
+
|
| 201 |
+
We define an interaction trial as successful if the part we interact with exhibits a considerable motion along the intended direction. The intended direction is the forward or backward direction for pushing and pulling, and the up or left direction for the remaining four directional action types. We measure the contact-point motion direction and accept it if the angle between the intended direction and the actual motion direction is smaller than $60^{\circ}$. For thresholding the part motion magnitude, we measure the gap between the starting and ending 1-DoF part pose and declare the trial successful if the gap is greater than 0.01 unit-length, or greater than 0.5 relative to the total motion range of the articulated part.
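The success test above can be written down directly. The following is a minimal sketch under our reading of the thresholds (either condition on the pose gap suffices); variable names and the 1-DoF pose convention are assumptions for illustration.

```python
import numpy as np

def interaction_successful(intended_dir, observed_dir,
                           part_pose_start, part_pose_end, part_motion_range):
    """Check the two success conditions described above.

    intended_dir / observed_dir: 3D vectors (intended vs. observed contact-point motion).
    part_pose_*: the 1-DoF joint value before/after the trial.
    part_motion_range: total travel of the articulated joint.
    """
    # Direction test: angle between intended and observed motion below 60 degrees.
    cos_angle = np.dot(intended_dir, observed_dir) / (
        np.linalg.norm(intended_dir) * np.linalg.norm(observed_dir) + 1e-8)
    direction_ok = cos_angle > np.cos(np.deg2rad(60.0))

    # Magnitude test: pose gap above 0.01 unit-length, or above 0.5 of the motion range.
    gap = abs(part_pose_end - part_pose_start)
    magnitude_ok = gap > 0.01 or (part_motion_range > 0 and gap / part_motion_range > 0.5)

    return direction_ok and magnitude_ok
```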
|
| 202 |
+
|
| 203 |
+
# 5.2. Metrics and Baselines
|
| 204 |
+
|
| 205 |
+
We propose two quantitative metrics for evaluating the performance of our method, which we compare against three baseline methods and one ablated version of our method.
|
| 206 |
+
|
| 207 |
+
Evaluation Metrics. A natural metric is the binary classification accuracy of the action scoring network $D_{s}$. We conduct random interaction simulation trials in the SAPIEN environment over testing shapes with random camera viewpoints, interaction positions and orientations. With random interactions, there are many more failed interaction trials than successful ones.
|
| 208 |
+
|
| 209 |
+
Thus, we report the F-score, which balances precision and recall for the positive (successful) class.
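For reference, the positive-class F-score can be computed as in this short sketch; these are the standard definitions, nothing here is specific to the paper's implementation.

```python
def f_score(pred_labels, gt_labels, eps=1e-8):
    """F1 score for the positive (successful-interaction) class."""
    tp = sum(1 for p, g in zip(pred_labels, gt_labels) if p == 1 and g == 1)
    fp = sum(1 for p, g in zip(pred_labels, gt_labels) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(pred_labels, gt_labels) if p == 0 and g == 1)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return 2 * precision * recall / (precision + recall + eps)
```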
|
| 210 |
+
|
| 211 |
+
To evaluate the action proposal quality, we introduce a sample-success-rate metric $ssr$ that measures what fraction of the interaction trials proposed by the networks are successful. This metric jointly evaluates all three network modules and mimics the final use case of proposing meaningful actions when a robot actuator wants to operate the object. Given an input image or partial point cloud, we first use the actionability scoring module $D_{a}$ to sample a pixel to interact with, then apply the action proposal module $D_{r}$ to generate several interaction proposals, and finally sample one interaction orientation according to the ratings from the action scoring module $D_{s}$. For both sampling operations, we normalize the predicted scores over all pixels or all action proposals into a probability distribution and sample among the candidates whose absolute (unnormalized) scores are greater than 0.5. For the proposal generation step, we sample 100 action proposals per pixel by drawing the random inputs to $D_{r}$ from a unit Gaussian distribution. For each sampled interaction proposal, we execute it in the simulator and observe the ground-truth outcome. We define the final measure as below.
|
| 212 |
+
|
| 213 |
+
$$
ssr = \frac{\#\,\text{successful proposals}}{\#\,\text{total proposals}} \tag{8}
$$
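The end-to-end sampling loop behind Eq. (8) might look like the following sketch. The module interfaces (`score_pixels`, `propose`, `rate`) and the simulator call are placeholders for this illustration, not the paper's actual API, and the score-threshold filtering mentioned above is omitted for brevity.

```python
import numpy as np

def run_one_proposed_trial(obs, D_a, D_r, D_s, simulator, n_proposals=100):
    """Sample and execute one network-proposed interaction; returns True on success."""
    # 1) Sample a pixel to interact with from the normalized actionability map of D_a.
    pixel_scores = D_a.score_pixels(obs)                  # non-negative, one score per pixel
    pixel = np.random.choice(len(pixel_scores), p=pixel_scores / pixel_scores.sum())

    # 2) Decode candidate gripper orientations from unit-Gaussian noise with D_r.
    z = np.random.randn(n_proposals, D_r.noise_dim)
    proposals = D_r.propose(obs, pixel, z)

    # 3) Rate the candidates with D_s and sample one orientation accordingly.
    ratings = D_s.rate(obs, pixel, proposals)             # non-negative ratings
    chosen = proposals[np.random.choice(len(proposals), p=ratings / ratings.sum())]

    # 4) Execute in the simulator and read off the ground-truth outcome.
    return simulator.execute(pixel, chosen)

# Eq. (8): ssr = (# successful proposals) / (# total proposals), e.g.
# ssr = np.mean([run_one_proposed_trial(o, D_a, D_r, D_s, sim) for o in test_observations])
```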
|
| 216 |
+
|
| 217 |
+
Baselines and Ablation Study. Since we are the first to propose and formulate this task, there is no prior work to compare against directly. To validate the effectiveness of the proposed method and to provide benchmarks for the proposed task, we compare against three baseline methods and one ablated version of our method:
|
| 218 |
+
|
| 219 |
+
- B-Random: a random agent that always gives a random proposal or scoring;
|
| 220 |
+
- B-Normal: a method that replaces the feature $f_{p}$ in our method with the 3-dimensional ground-truth normal, with the same decoding heads, losses and training scheme as our proposed method;
|
| 221 |
+
- B-PCPNet: a method that replaces the feature $f_{p}$ in our method with predicted normals and curvatures, which are estimated using PCPNet [9] on 3D partial point cloud inputs, with the same decoding heads, losses and training scheme as our proposed method;
|
| 222 |
+
- Ours w/o OS: an ablated version of our method that removes the online adaptive data sampling strategy and only collects online data with random exploration. We keep the total number of interaction queries the same as in our full method for a fair comparison (see the sketch below).
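To clarify what this ablation removes, here is a heavily simplified, generic sketch contrasting purely random exploration with score-guided (adaptive) query sampling under the same query budget. The function and the `env` / `policy_scores` interfaces are hypothetical; this is one plausible instantiation of the idea, not the paper's exact sampling procedure.

```python
import numpy as np

def collect_online_data(env, policy_scores, budget, adaptive=True, explore_frac=0.5):
    """Collect 'budget' interaction trials, optionally biasing queries by predicted scores.

    With adaptive=False this reduces to pure random exploration (the 'Ours w/o OS' setting);
    with adaptive=True part of the budget is spent where the current network is most
    confident, which tends to yield more positive trials for the same number of queries.
    """
    data = []
    for _ in range(budget):
        if adaptive and np.random.rand() > explore_frac:
            # exploit: query a candidate the current network rates highly
            candidates = env.sample_random_queries(64)
            query = candidates[int(np.argmax(policy_scores(candidates)))]
        else:
            # explore: purely random query
            query = env.sample_random_queries(1)[0]
        outcome = env.execute(query)   # ground-truth success/failure from the simulator
        data.append((query, outcome))
    return data
```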
|
| 223 |
+
|
| 224 |
+
Among the baseline methods, B-Random provides a lower-bound reference for the proposed metrics, while B-Normal is designed to validate that our network learns localized but interaction-oriented features rather than simple geometric features such as normal directions. B-PCPNet further
|
| 225 |
+
|
| 226 |
+

|
| 227 |
+
Figure 4. We visualize the per-pixel action scoring predictions over the articulated parts given certain gripper orientations for interaction. In each set of results, the left two shapes shown in blue are testing shapes from training categories, while the middle two shapes highlighted in dark red are shapes from testing categories. The rightmost columns show the results for the 2D experiments.
|
| 228 |
+
|
| 229 |
+
<table><tr><td></td><td></td><td>F-score (%)</td><td>Sample-Succ (%)</td></tr><tr><td rowspan="5">pushing</td><td>B-Random</td><td>12.02 / 7.40</td><td>6.80 / 3.79</td></tr><tr><td>B-Normal</td><td>31.94 / 17.39</td><td>21.72 / 11.57</td></tr><tr><td>B-PCPNet</td><td>32.01 / 18.21</td><td>18.04 / 9.15</td></tr><tr><td>2D-ours</td><td>34.21 / 22.68</td><td>21.36 / 10.58</td></tr><tr><td>3D-ours</td><td>43.76 / 26.61</td><td>28.54 / 14.74</td></tr><tr><td rowspan="5">pushing-up</td><td>B-Random</td><td>4.92 / 3.31</td><td>2.70 / 1.62</td></tr><tr><td>B-Normal</td><td>13.37 / 7.56</td><td>8.93 / 4.81</td></tr><tr><td>B-PCPNet</td><td>15.08 / 7.50</td><td>8.09 / 4.86</td></tr><tr><td>2D-ours</td><td>15.35 / 8.69</td><td>8.70 / 5.76</td></tr><tr><td>3D-ours</td><td>21.64 / 11.20</td><td>12.06 / 6.56</td></tr><tr><td rowspan="5">pushing-left</td><td>B-Random</td><td>6.18 / 4.05</td><td>3.08 / 2.26</td></tr><tr><td>B-Normal</td><td>18.52 / 10.72</td><td>11.59 / 5.72</td></tr><tr><td>B-PCPNet</td><td>18.66 / 10.81</td><td>9.69 / 4.43</td></tr><tr><td>2D-ours</td><td>18.93 / 12.04</td><td>11.68 / 7.22</td></tr><tr><td>3D-ours</td><td>26.04 / 16.06</td><td>15.95 / 9.31</td></tr><tr><td rowspan="5">pulling</td><td>B-Random</td><td>2.26 / 3.19</td><td>1.07 / 1.55</td></tr><tr><td>B-Normal</td><td>6.20 / 8.02</td><td>3.79 / 4.18</td></tr><tr><td>B-PCPNet</td><td>7.19 / 8.57</td><td>4.15 / 3.71</td></tr><tr><td>2D-ours</td><td>7.04 / 8.98</td><td>4.07 / 4.70</td></tr><tr><td>3D-ours</td><td>10.95 / 12.88</td><td>7.51 / 7.85</td></tr><tr><td rowspan="5">pulling-up</td><td>B-Random</td><td>5.01 / 4.13</td><td>2.22 / 2.41</td></tr><tr><td>B-Normal</td><td>13.64 / 9.40</td><td>8.67 / 6.08</td></tr><tr><td>B-PCPNet</td><td>14.73 / 10.98</td><td>8.37 / 6.19</td></tr><tr><td>2D-ours</td><td>15.74 / 12.88</td><td>9.71 / 7.10</td></tr><tr><td>3D-ours</td><td>22.24 / 16.28</td><td>13.53 / 9.28</td></tr><tr><td rowspan="5">pulling-left</td><td>B-Random</td><td>5.83 / 4.16</td><td>3.06 / 2.31</td></tr><tr><td>B-Normal</td><td>17.52 / 10.51</td><td>11.14 / 5.82</td></tr><tr><td>B-PCPNet</td><td>18.89 / 11.00</td><td>9.12 / 5.19</td></tr><tr><td>2D-ours</td><td>16.20 / 10.16</td><td>10.15 / 6.05</td></tr><tr><td>3D-ours</td><td>25.22 / 14.49</td><td>14.25 / 7.10</td></tr></table>
|
| 230 |
+
|
| 231 |
+
validates that our network learns geometric features beyond local normals and curvatures. The ablated version Ours w/o OS further demonstrates the improvement provided by the proposed online adaptive data sampling (OS) strategy.
|
| 232 |
+
|
| 233 |
+
Table 2. Quantitative Evaluations and Comparisons. We compare our method to three baseline methods (i.e. B-Random, B-Normal and B-PCPNet). In each entry, we report the numbers evaluated over the testing shapes from the training categories (before the slash) and the shapes from the test categories (after the slash). We use the 3D- and 2D- prefixes to indicate the input data modalities; the baseline methods are not sensitive to the input modality. We observe that 3D-ours achieves the best performance.
|
| 234 |
+
|
| 235 |
+
<table><tr><td></td><td></td><td>F-score (%)</td><td>Sample-Succ (%)</td></tr><tr><td rowspan="2">pushing</td><td>Ours w/o OS</td><td>40.54 / 25.66</td><td>25.18 / 11.76</td></tr><tr><td>Ours</td><td>43.76 / 26.61</td><td>28.54 / 14.74</td></tr><tr><td rowspan="2">pushing-up</td><td>Ours w/o OS</td><td>21.03 / 11.57</td><td>12.88 / 6.43</td></tr><tr><td>Ours</td><td>21.64 / 11.20</td><td>12.06 / 6.56</td></tr><tr><td rowspan="2">pushing-left</td><td>Ours w/o OS</td><td>24.71 / 14.91</td><td>14.12 / 7.02</td></tr><tr><td>Ours</td><td>26.04 / 16.06</td><td>15.95 / 9.31</td></tr><tr><td rowspan="2">pulling</td><td>Ours w/o OS</td><td>10.28 / 12.09</td><td>5.62 / 5.87</td></tr><tr><td>Ours</td><td>10.95 / 12.88</td><td>7.51 / 7.85</td></tr><tr><td rowspan="2">pulling-up</td><td>Ours w/o OS</td><td>20.51 / 13.70</td><td>12.18 / 7.96</td></tr><tr><td>Ours</td><td>22.24 / 16.28</td><td>13.53 / 9.28</td></tr><tr><td rowspan="2">pulling-left</td><td>Ours w/o OS</td><td>23.41 / 15.07</td><td>14.23 / 6.81</td></tr><tr><td>Ours</td><td>25.22 / 14.49</td><td>14.25 / 7.10</td></tr></table>
|
| 236 |
+
|
| 237 |
+
Table 3. Ablation Study. We compare our method to an ablated version in which the online adaptive data sampling is removed. Using online adaptive data sampling (OS) clearly helps in most cases.
|
| 238 |
+
|
| 239 |
+
# 5.3. Results and Analysis
|
| 240 |
+
|
| 241 |
+
Table 2 presents quantitative comparisons of our method against the three baselines, where we observe that 3D-Ours performs the best. Our network learns localized but interaction-oriented geometric features and thus performs better than B-Normal and B-PCPNet, which only use normals and curvatures as features. Although it lacks explicit 3D information and thus performs worse than the 3D networks, the 2D version 2D-Ours still achieves competitive results. Our networks also learn representations that generalize successfully to unseen novel object categories. The ablation study in Table 3 further validates that the online adaptive data sampling (OS) strategy boosts performance.
|
| 242 |
+
|
| 243 |
+
We visualize the predicted action scores in Fig. 4, where we clearly see that given different primitive action types and gripper orientations, our network learns to extract geometric features that are action-specific and gripper-aware. For example, for pulling, we predict higher scores over high-curvature regions such as part boundaries and handles, while for pushing, almost all flat surface pixels belonging to a pushable part
|
| 244 |
+
|
| 245 |
+

|
| 246 |
+
Figure 5. We visualize (a) the actionability scoring and (b) the action proposal predictions on an example cabinet with a door that can be slipped to open. We show the top-4 rated proposals.
|
| 247 |
+
|
| 248 |
+
are equally highlighted, and the pixels around handles are reasonably predicted to be not pushable due to object-gripper collisions. For the directional interaction types, the action direction clearly plays an important role in the predictions. For instance, the pushing-left agent learns to scratch the side surface pixels of the cabinet drawers to close them (third row, leftmost column), and the pulling-up agent learns to lift the handle of a bucket by grasping it and pulling up (second row, rightmost column).
|
| 249 |
+
|
| 250 |
+
We illustrate the estimated actionability scores over the articulated parts for the six action primitives in Fig. 1 and Fig. 5. We observe that the door/drawer handles and part boundaries are highlighted, especially for pulling and pulling-up, where reasonable interaction proposals are produced. Fig. 1 clearly shows the different actionability predictions over the door pixels: the door surface pixels are in general pushable, while only the handle part is pullable. Fig. 5 presents comparisons among the four directional interaction types. We observe similar actionability predictions for pushing-up and pushing-left but different orientation proposals for interacting with the same pixel. Interestingly, comparing pulling-up and pulling-left, we see that grasping comes into play for pulling-up, making it more actionable than pulling-left when attempting to slide open the cabinet door. We present more results in the supplementary material.
|
| 251 |
+
|
| 252 |
+
# 5.4. Real-world Data
|
| 253 |
+
|
| 254 |
+
We directly apply our networks trained on synthetic data to real-world data. Fig. 6 presents the predictions of our action scoring module on real 3D scans and 2D images, showing promising evidence that our networks transfer the learned actionable information to real-world data.
|
| 255 |
+
|
| 256 |
+

|
| 257 |
+
Figure 6. We visualize our action scoring predictions given certain gripper orientations over three real-world 3D scans from the Replica dataset [38] and Google Scanned Objects [34, 33], as well as over two real 2D images [1]. Results are shown over all pixels because articulated part masks are not available. Although there is no guarantee for the predictions over pixels outside the articulated parts, the results remain sensible if we allow the entire objects to move.
|
| 258 |
+
|
| 259 |
+
# 6. Conclusion
|
| 260 |
+
|
| 261 |
+
We formulate a new and challenging task of predicting per-pixel actionable information for manipulating articulated 3D objects. Using an interactive environment built upon SAPIEN and the PartNet-Mobility dataset, we train neural networks that map pixels to actions: for each pixel on an articulated part of an object, we predict its actionability with respect to six primitive actions and propose candidate interaction parameters. We present extensive quantitative evaluations and qualitative analysis of the proposed method. Results show that the learned knowledge is highly localized and thus generalizes to novel unseen object categories.
|
| 262 |
+
|
| 263 |
+
Limitations and Future Work. We see many possibilities for future extensions. First, our network takes a single-frame visual input, which naturally introduces ambiguity in the solution space when the articulated part mobility cannot be fully determined from a single snapshot. Second, we limit our experiments to six types of action primitives with hard-coded motion trajectories; one future extension is to generalize the framework to free-form interactions. Finally, our method does not explicitly model part segmentation or part motion axes, which could be incorporated in future work to further improve performance.
|
| 264 |
+
|
| 265 |
+
# Acknowledgements
|
| 266 |
+
|
| 267 |
+
This work was supported primarily by Facebook during Kaichun's internship, and also by NSF grant IIS-1763268, a Vannevar Bush faculty fellowship, and an Amazon AWS ML award. We thank Yuzhe Qin and Fanbo Xiang for their help in setting up the SAPIEN environment.
|
| 268 |
+
|
| 269 |
+
# References
|
| 270 |
+
|
| 271 |
+
[1] Two real images. https://www.uline.com/Product/Detail/S-4080/Corrugated-Boxes-200-Test/8-x-6-x-4-Corrugated-Boxes, https://donbaraton.es/p/mesa-de-noche-3-cajones-acabado-effecto-madera-alabama. 8
|
| 272 |
+
[2] Samarth Brahmbhatt, Cusuh Ham, Charles C Kemp, and James Hays. Contactdb: Analyzing and predicting grasp contact via thermal imaging. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8709-8719, 2019. 2
|
| 273 |
+
[3] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. 2
|
| 274 |
+
[4] Dengsheng Chen, Jun Li, Zheng Wang, and Kai Xu. Learning canonical shape space for category-level 6d object pose and size estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11973-11982, 2020. 2
|
| 275 |
+
[5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 3
|
| 276 |
+
[6] Helin Dutagaci, Chun Pan Cheung, and Afzal Godil. Evaluation of 3d interest point detection techniques via human-generated ground truth. The Visual Computer, 28(9):901-917, 2012. 2
|
| 277 |
+
[7] Haoqiang Fan, Hao Su, and Leonidas J Guibas. A point set generation network for 3d object reconstruction from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 605-613, 2017. 5
|
| 278 |
+
[8] David F. Fouhey, Vincent Delaitre, Abhinav Gupta, Alexei A. Efros, Ivan Laptev, and Josef Sivic. People watching: Human actions as a cue for single-view geometry. In ECCV, 2012. 2
|
| 279 |
+
[9] Paul Guerrero, Yanir Kleiman, Maks Ovsjanikov, and Niloy J Mitra. PCPNet: Learning local shape properties from raw point clouds. In Computer Graphics Forum, volume 37, pages 75-85. Wiley Online Library, 2018. 6
|
| 280 |
+
[10] Henning Hamer, Juergen Gall, Thibaut Weise, and Luc Van Gool. An object-dependent hand pose prior from sparse training data. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 671-678. IEEE, 2010. 2
|
| 281 |
+
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 3
|
| 282 |
+
[12] Ruizhen Hu, Wenchao Li, Oliver Van Kaick, Ariel Shamir, Hao Zhang, and Hui Huang. Learning to predict part mobility from a single static snapshot. ACM Transactions on Graphics (TOG), 36(6):1-13, 2017. 2
|
| 283 |
+
[13] Ruizhen Hu, Manolis Savva, and Oliver van Kaick. Functionality representations and applications for shape analysis. In Computer Graphics Forum, volume 37, pages 603-624. Wiley Online Library, 2018. 2
|
| 284 |
+
|
| 285 |
+
[14] Ruizhen Hu, Zihao Yan, Jingwen Zhang, Oliver Van Kaick, Ariel Shamir, Hao Zhang, and Hui Huang. Predictive and generative neural networks for object functionality. arXiv preprint arXiv:2006.15520, 2020. 2
|
| 286 |
+
[15] Ajinkya Jain, Rudolf Lioutikov, and Scott Niekum. Screwnet: Category-independent articulation model estimation from depth images using screw theory. arXiv preprint arXiv:2008.10518, 2020. 2
|
| 287 |
+
[16] Mohi Khansari, Daniel Kappler, Jianlan Luo, Jeff Bingham, and Mrinal Kalakrishnan. Action image representation: Learning scalable deep grasping policies with zero real world data. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 3597-3603. IEEE, 2020. 2
|
| 288 |
+
[17] Vladimir G Kim, Siddhartha Chaudhuri, Leonidas Guibas, and Thomas Funkhouser. Shape2pose: Human-centric shape analysis. ACM Transactions on Graphics (TOG), 33(4):1-12, 2014. 2
|
| 289 |
+
[18] Xiaolong Li, He Wang, Li Yi, Leonidas J Guibas, A Lynn Abbott, and Shuran Song. Category-level articulated object pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3706-3715, 2020. 2
|
| 290 |
+
[19] Hubert Lin, Melinos Averkiou, Evangelos Kalogerakis, Balazs Kovacs, Siddhant Ranade, Vladimir Kim, Siddhartha Chaudhuri, and Kavita Bala. Learning material-aware local descriptors for 3d shapes. In 2018 International Conference on 3D Vision (3DV), pages 150-159. IEEE, 2018. 2
|
| 291 |
+
[20] Martin Lohmann, Jordi Salvador, Aniruddha Kembhavi, and Roozbeh Mottaghi. Learning about objects by learning to interact with them. Advances in Neural Information Processing Systems, 33, 2020. 3
|
| 292 |
+
[21] Jeffrey Mahler, Florian T Pokorny, Brian Hou, Melrose Roderick, Michael Laskey, Mathieu Aubry, Kai Kohlhoff, Torsten Kröger, James Kuffner, and Ken Goldberg. Dex-net 1.0: A cloud-based network of 3d objects for robust grasp planning using a multi-armed bandit model with correlated rewards. In IEEE International Conference on Robotics and Automation (ICRA), pages 1957–1964. IEEE, 2016. 2
|
| 293 |
+
[22] Roberto Martin-Martín, Clemens Eppner, and Oliver Brock. The rbo dataset of articulated objects and interactions. The International Journal of Robotics Research, 38(9):1013-1019, 2019. 2
|
| 294 |
+
[23] Andrew T Miller, Steffen Knoop, Henrik I Christensen, and Peter K Allen. Automatic grasp planning using shape primitives. In 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422), volume 2, pages 1824-1829. IEEE, 2003. 2
|
| 295 |
+
[24] Kaichun Mo, Shilin Zhu, Angel X Chang, Li Yi, Subarna Tripathi, Leonidas J Guibas, and Hao Su. Partnet: A large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 909–918, 2019. 2
|
| 296 |
+
[25] Adithyavairavan Murali, Arsalan Mousavian, Clemens Eppner, Chris Paxton, and Dieter Fox. 6-dof grasping for target-driven object manipulation in clutter. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 6232-6238. IEEE, 2020. 3
|
| 297 |
+
|
| 298 |
+
[26] Tushar Nagarajan, Christoph Feichtenhofer, and Kristen Grauman. Grounded human-object interaction hotspots from video. In Proceedings of the IEEE International Conference on Computer Vision, pages 8688-8697, 2019. 2
|
| 299 |
+
[27] Tushar Nagarajan and Kristen Grauman. Learning affordance landscapes for interaction exploration in 3d environments. In NeurIPS, 2020. 3
|
| 300 |
+
[28] Nvidia. PhysX physics engine. https://www.geforce.com/hardware/technology/physx. 5
|
| 301 |
+
[29] Deepak Pathak, Yide Shentu, Dian Chen, Pulkit Agrawal, Trevor Darrell, Sergey Levine, and Jitendra Malik. Learning instance segmentation by interaction. In CVPR Workshop on Benchmarks for Deep Learning in Robotic Vision, 2018. 3
|
| 302 |
+
[30] Lerrel Pinto and Abhinav Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In 2016 IEEE international conference on robotics and automation (ICRA), pages 3406-3413. IEEE, 2016. 2
|
| 303 |
+
[31] Lerrel Pinto and Abhinav Gupta. Learning to push by grasping: Using multiple tasks for effective learning. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2161-2168. IEEE, 2017. 3
|
| 304 |
+
[32] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in neural information processing systems, pages 5099-5108, 2017. 3
|
| 305 |
+
[33] Google Research. Black decker stainless steel toaster 4 slice. https://fuel.ignitionrobotics.org/1.0/GoogleResearch/models/Black_decker_Steinless_Steel_Toaster_4_Slice, 2020. 8
|
| 306 |
+
[34] Google Research. Victor reversible bookend. https://app.ignitionrobotics.org/GoogleResearch/fuel/models/Victor_Reversible_Bookend, 2020. 8
|
| 307 |
+
[35] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234-241. Springer, 2015. 3
|
| 308 |
+
[36] Manolis Savva, Angel X Chang, and Pat Hanrahan. Semantically-enriched 3d models for common-sense knowledge. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 24-31, 2015. 2
|
| 309 |
+
[37] Lin Shao, Angel X Chang, Hao Su, Manolis Savva, and Leonidas Guibas. Cross-modal attribute transfer for rescaling 3d models. In 2017 International Conference on 3D Vision (3DV), pages 640-648. IEEE, 2017. 2
|
| 310 |
+
[38] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, et al. The replica dataset: A digital replica of indoor spaces. arXiv preprint arXiv:1906.05797, 2019. 8
|
| 311 |
+
[39] Martin Sundermeyer, Zoltan-Csaba Marton, Maximilian Durner, Manuel Brucker, and Rudolph Triebel. Implicit 3d orientation learning for 6d object detection from rgb images. In Proceedings of the European Conference on Computer Vision (ECCV), pages 699-715, 2018. 2
|
| 312 |
+
|
| 313 |
+
[40] Jonathan Tremblay, Thang To, Balakumar Sundaralingam, Yu Xiang, Dieter Fox, and Stan Birchfield. Deep object pose estimation for semantic robotic grasping of household objects. arXiv preprint arXiv:1809.10790, 2018. 2
|
| 314 |
+
[41] He Wang, Srinath Sridhar, Jingwei Huang, Julien Valentin, Shuran Song, and Leonidas J Guibas. Normalized object coordinate space for category-level 6d object pose and size estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2642-2651, 2019. 2
|
| 315 |
+
[42] Xiaogang Wang, Bin Zhou, Yahao Shi, Xiaowu Chen, Qinping Zhao, and Kai Xu. Shape2motion: Joint analysis of motion parts and attributes from 3d shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8876-8884, 2019. 2
|
| 316 |
+
[43] Erik Wijmans. Pointnet++ pytorch. https://github.com/erikwijmans/Pointnet2_PyTorch, 2018. 3
|
| 317 |
+
[44] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1912-1920, 2015. 2
|
| 318 |
+
[45] Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, et al. Sapien: A simulated part-based interactive environment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11097-11107, 2020. 2, 4, 5
|
| 319 |
+
[46] Yu Xiang, Tanner Schmidt, Venkatraman Narayanan, and Dieter Fox. PoseCNN: A convolutional neural network for 6d object pose estimation in cluttered scenes. arXiv preprint arXiv:1711.00199, 2017. 2
|
| 320 |
+
[47] Xianghao Xu, David Charatan, Sonia Raychaudhuri, Hanxiao Jiang, Mae Heitmann, Vladimir Kim, Siddhartha Chaudhuri, Manolis Savva, Angel Chang, and Daniel Ritchie. Motion annotation programs: A scalable approach to annotating kinematic articulations in large 3d shape collections. 3DV, 2020. 2
|
| 321 |
+
[48] Pavel Yakubovskiy. Segmentation models pytorch. https://github.com/qubvel/segmentation_models.pytorch, 2020. 3
|
| 322 |
+
[49] Zihao Yan, Ruizhen Hu, Xingguang Yan, Luanmin Chen, Oliver Van Kaick, Hao Zhang, and Hui Huang. RPM-net: Recurrent prediction of motion and parts from point cloud. ACM Transactions on Graphics (TOG), 38(6):240, 2019. 2
|
| 323 |
+
[50] Li Yi, Haibin Huang, Difan Liu, Evangelos Kalogerakis, Hao Su, and Leonidas Guibas. Deep part induction from articulated object pairs. arXiv preprint arXiv:1809.07417, 2018. 2
|
| 324 |
+
[51] Li Yi, Vladimir G Kim, Duygu Ceylan, I-Chao Shen, Mengyan Yan, Hao Su, Cewu Lu, Qixing Huang, Alla Sheffer, and Leonidas Guibas. A scalable active framework for region annotation in 3d shape collections. ACM Transactions on Graphics (ToG), 35(6):1-12, 2016. 2
|
| 325 |
+
[52] Yang You, Yujing Lou, Chengkun Li, Zhoujun Cheng, Liangwei Li, Lizhuang Ma, Cewu Lu, and Weiming Wang. Keypointnet: A large-scale 3d keypoint dataset aggregated
|
| 326 |
+
|
| 327 |
+
from numerous human annotations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020. 2
|
| 328 |
+
[53] Andy Zeng, Shuran Song, Stefan Welker, Johnny Lee, Alberto Rodriguez, and Thomas Funkhouser. Learning synergies between pushing and grasping with self-supervised deep reinforcement learning. 2018. 3
|
| 329 |
+
[54] Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, and Hao Li. On the continuity of rotation representations in neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5745-5753, 2019. 3, 5
|
where2actfrompixelstoactionsforarticulated3dobjects/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1ff3adb3da5da24409b78f79c9f01d78b02e1259cc0da2f0720140aff4af65e4
|
| 3 |
+
size 537500
|
where2actfrompixelstoactionsforarticulated3dobjects/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:7eeaeae44de599a6cae0d12dbb7c75c20ae32583943b61bd45405093a51acdc2
|
| 3 |
+
size 410312
|
whereareyouheadingdynamictrajectorypredictionwithexpertgoalexamples/22c0ccfa-3d70-4f8c-acb0-c3b91ebcaf58_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:7759c002275c2d3acc6f4fe0ca3608e8684fc6acd5c7e7f174bd01671b0981fe
|
| 3 |
+
size 87550
|
whereareyouheadingdynamictrajectorypredictionwithexpertgoalexamples/22c0ccfa-3d70-4f8c-acb0-c3b91ebcaf58_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d397874dbaee0cc87eb1ca037436eb06fff0f648bd484449b2dae8e1582117c2
|
| 3 |
+
size 107321
|