SlowGuess committed (verified)
Commit: eff004c
1 Parent(s): d6f8d00

Add Batch 83454d05-dde8-4b21-9e9b-fde2fbbac837

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. abandoningthebayerfiltertoseeinthedark/3848649e-055f-4b68-8c02-b79b217c5ead_content_list.json +3 -0
  2. abandoningthebayerfiltertoseeinthedark/3848649e-055f-4b68-8c02-b79b217c5ead_model.json +3 -0
  3. abandoningthebayerfiltertoseeinthedark/3848649e-055f-4b68-8c02-b79b217c5ead_origin.pdf +3 -0
  4. abandoningthebayerfiltertoseeinthedark/full.md +336 -0
  5. abandoningthebayerfiltertoseeinthedark/images.zip +3 -0
  6. abandoningthebayerfiltertoseeinthedark/layout.json +3 -0
  7. abodatasetandbenchmarksforrealworld3dobjectunderstanding/165b1a6d-d340-46a6-8a6b-371c91127e9b_content_list.json +3 -0
  8. abodatasetandbenchmarksforrealworld3dobjectunderstanding/165b1a6d-d340-46a6-8a6b-371c91127e9b_model.json +3 -0
  9. abodatasetandbenchmarksforrealworld3dobjectunderstanding/165b1a6d-d340-46a6-8a6b-371c91127e9b_origin.pdf +3 -0
  10. abodatasetandbenchmarksforrealworld3dobjectunderstanding/full.md +259 -0
  11. abodatasetandbenchmarksforrealworld3dobjectunderstanding/images.zip +3 -0
  12. abodatasetandbenchmarksforrealworld3dobjectunderstanding/layout.json +3 -0
  13. abpnadaptiveblendpyramidnetworkforrealtimelocalretouchingofultrahighresolutionphoto/0836d8e6-e6e7-4830-9182-ed9613a4b549_content_list.json +3 -0
  14. abpnadaptiveblendpyramidnetworkforrealtimelocalretouchingofultrahighresolutionphoto/0836d8e6-e6e7-4830-9182-ed9613a4b549_model.json +3 -0
  15. abpnadaptiveblendpyramidnetworkforrealtimelocalretouchingofultrahighresolutionphoto/0836d8e6-e6e7-4830-9182-ed9613a4b549_origin.pdf +3 -0
  16. abpnadaptiveblendpyramidnetworkforrealtimelocalretouchingofultrahighresolutionphoto/full.md +374 -0
  17. abpnadaptiveblendpyramidnetworkforrealtimelocalretouchingofultrahighresolutionphoto/images.zip +3 -0
  18. abpnadaptiveblendpyramidnetworkforrealtimelocalretouchingofultrahighresolutionphoto/layout.json +3 -0
  19. acceleratingdetrconvergenceviasemanticalignedmatching/93bf8c77-36d2-40ed-b94e-4226b5a6b6a2_content_list.json +3 -0
  20. acceleratingdetrconvergenceviasemanticalignedmatching/93bf8c77-36d2-40ed-b94e-4226b5a6b6a2_model.json +3 -0
  21. acceleratingdetrconvergenceviasemanticalignedmatching/93bf8c77-36d2-40ed-b94e-4226b5a6b6a2_origin.pdf +3 -0
  22. acceleratingdetrconvergenceviasemanticalignedmatching/full.md +273 -0
  23. acceleratingdetrconvergenceviasemanticalignedmatching/images.zip +3 -0
  24. acceleratingdetrconvergenceviasemanticalignedmatching/layout.json +3 -0
  25. acceleratingvideoobjectsegmentationwithcompressedvideo/8ac3eeea-b648-47f2-8829-27ee10c0ec1a_content_list.json +3 -0
  26. acceleratingvideoobjectsegmentationwithcompressedvideo/8ac3eeea-b648-47f2-8829-27ee10c0ec1a_model.json +3 -0
  27. acceleratingvideoobjectsegmentationwithcompressedvideo/8ac3eeea-b648-47f2-8829-27ee10c0ec1a_origin.pdf +3 -0
  28. acceleratingvideoobjectsegmentationwithcompressedvideo/full.md +309 -0
  29. acceleratingvideoobjectsegmentationwithcompressedvideo/images.zip +3 -0
  30. acceleratingvideoobjectsegmentationwithcompressedvideo/layout.json +3 -0
  31. accurate3dbodyshaperegressionusingmetricandsemanticattributes/55090ba8-e47f-43e1-99bb-5d8bd426be4e_content_list.json +3 -0
  32. accurate3dbodyshaperegressionusingmetricandsemanticattributes/55090ba8-e47f-43e1-99bb-5d8bd426be4e_model.json +3 -0
  33. accurate3dbodyshaperegressionusingmetricandsemanticattributes/55090ba8-e47f-43e1-99bb-5d8bd426be4e_origin.pdf +3 -0
  34. accurate3dbodyshaperegressionusingmetricandsemanticattributes/full.md +379 -0
  35. accurate3dbodyshaperegressionusingmetricandsemanticattributes/images.zip +3 -0
  36. accurate3dbodyshaperegressionusingmetricandsemanticattributes/layout.json +3 -0
  37. acplanticurriculumpseudolabellingforsemisupervisedmedicalimageclassification/d6176429-3843-42b4-b235-8f7c679bc7da_content_list.json +3 -0
  38. acplanticurriculumpseudolabellingforsemisupervisedmedicalimageclassification/d6176429-3843-42b4-b235-8f7c679bc7da_model.json +3 -0
  39. acplanticurriculumpseudolabellingforsemisupervisedmedicalimageclassification/d6176429-3843-42b4-b235-8f7c679bc7da_origin.pdf +3 -0
  40. acplanticurriculumpseudolabellingforsemisupervisedmedicalimageclassification/full.md +275 -0
  41. acplanticurriculumpseudolabellingforsemisupervisedmedicalimageclassification/images.zip +3 -0
  42. acplanticurriculumpseudolabellingforsemisupervisedmedicalimageclassification/layout.json +3 -0
  43. acquiringadynamiclightfieldthroughasingleshotcodedimage/16a63fb6-6ee6-4340-b18b-b692eed45d81_content_list.json +3 -0
  44. acquiringadynamiclightfieldthroughasingleshotcodedimage/16a63fb6-6ee6-4340-b18b-b692eed45d81_model.json +3 -0
  45. acquiringadynamiclightfieldthroughasingleshotcodedimage/16a63fb6-6ee6-4340-b18b-b692eed45d81_origin.pdf +3 -0
  46. acquiringadynamiclightfieldthroughasingleshotcodedimage/full.md +335 -0
  47. acquiringadynamiclightfieldthroughasingleshotcodedimage/images.zip +3 -0
  48. acquiringadynamiclightfieldthroughasingleshotcodedimage/layout.json +3 -0
  49. activelearningbyfeaturemixing/eca455da-15ce-41de-885d-779d788a4780_content_list.json +3 -0
  50. activelearningbyfeaturemixing/eca455da-15ce-41de-885d-779d788a4780_model.json +3 -0
abandoningthebayerfiltertoseeinthedark/3848649e-055f-4b68-8c02-b79b217c5ead_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:064ea12b38d66b718d54a9a3fff53d980fc261af4d7fd0cf40bd0e318fd8650d
3
+ size 74763
abandoningthebayerfiltertoseeinthedark/3848649e-055f-4b68-8c02-b79b217c5ead_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1f06c91e30afaeb821f8708559267666d90bc272eba239e628c45649a7ee8f3b
3
+ size 93730
abandoningthebayerfiltertoseeinthedark/3848649e-055f-4b68-8c02-b79b217c5ead_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:33c7b8b10210209445c7e0dec28917b9503167498d6e3d4607700fc951e86c19
3
+ size 7752964
abandoningthebayerfiltertoseeinthedark/full.md ADDED
@@ -0,0 +1,336 @@
1
+ # Abandoning the Bayer-Filter to See in the Dark
2
+
3
+ Xingbo Dong $^{1,3*†}$ Wanyan Xu $^{1,2*†}$ Zhihui Miao $^{1,2†}$ Lan Ma $^{1}$
4
+ Chao Zhang $^{1}$ Jiewen Yang $^{1}$ Zhe Jin $^{4}$ Andrew Beng Jin Teoh $^{3}$ Jiajun Shen $^{1}$ $^{1}$ TCL AI Lab $^{2}$ Fuzhou University $^{3}$ Yonsei University $^{4}$ Anhui University
5
+
6
+ {xingbo.dong,bjteoh}@yonsei.ac.kr,{208527051,208527090}@fzu.edu.cn,{sjj,rubyma}@tcl.com
7
+
8
+ # Abstract
9
+
10
+ Low-light image enhancement, a pervasive but challenging problem, plays a central role in enhancing the visibility of an image captured in a poor illumination environment. Because not all photons can pass the Bayer-Filter on the sensor of a color camera, in this work, we first present a De-Bayer-Filter simulator based on deep neural networks to generate a monochrome raw image from the colored raw image. Next, a fully convolutional network is proposed to achieve low-light image enhancement by fusing colored raw data with synthesized monochrome data. Channel-wise attention is also introduced to the fusion process to establish a complementary interaction between features from colored and monochrome raw images. To train the convolutional networks, we propose a dataset with monochrome and color raw pairs named the Mono-Colored Raw paired dataset (MCR), collected using a monochrome camera without a Bayer-Filter and a color camera with a Bayer-Filter. The proposed pipeline takes advantage of the fusion of the virtual monochrome and the color raw images, and our extensive experiments indicate that significant improvement can be achieved by leveraging raw sensor data and data-driven learning. The project is available at https://github.com/TCL-AILab/Abandon_Bayer-Filter_See_in_the_Dark
11
+
12
+ # 1. Introduction
13
+
14
+ For a digitized image, quality can be severely degraded by color distortions and noise under poor illumination conditions such as indoors, at night, or with improper camera exposure parameters.
15
+
16
+ A long exposure time and a high ISO (sensitivity to light) are often leveraged in low-light environments to preserve visual quality. However, excessive exposure leads to motion blur and unbalanced overexposure, and a high ISO
17
+
18
+ ![](images/1cab1449950eb7d774bb9bdf076e465028a6274c282b0622af9fcf9589a03b44.jpg)
19
+ Figure 1. Overview of the proposed pipeline. We propose to generate monochrome raw data by a learned De-Bayer-Filter module. Then, a dual branch neural network is designed to bridge monochrome and colored raw to achieve the low-light image enhancement task.
20
+
21
+ amplifies the noise. Though the camera's flash provides exposure compensation for the insufficient light, it is not suitable for long-distance shots, and also introduces color distortions and artifacts. On the other hand, various algorithms have been reported to enhance the low-light image. Recently, deep neural network models have been utilized to solve the low-light image restoration problem, such as DeepISP [22] and Seeing In the Dark (SID) [3].
22
+
23
+ However, those algorithms are restricted to the image processing pipeline, as the photon capture rate and quantum efficiency are usually overlooked. In general, a high photon capture rate can improve an image's visual quality significantly. One typical example is the RYYB-based color filter, which can capture $40\%$ more photons than the Bayer-RGGB-based color filter<sup>1</sup>. Hence, the RYYB-based color filter naturally achieves better performance.
24
+
25
+ Bayer filter removal is another plausible way to improve the photons capture rate. The Bayer filter is an array of
26
+
27
+ many tiny color filters that cover the image sensor to render color information (see Fig. 1). By removing the Bayer filter and sacrificing the color information, the image sensor can capture more photons, which contributes to clearer visibility under poor illumination conditions compared to a camera with a Bayer filter (see Fig. 2 (a)). On the other hand, dual cameras are one of the trends in today's smart devices such as smartphones. One type of dual-camera set is the combination of a monochrome sensor and a color sensor<sup>2</sup>. The monochrome sensor is usually identical to the color sensor but without a Bayer array filter. Such a dual-camera setting can achieve better imaging quality in a low-light environment due to the larger number of photons received by the sensor. However, the extra camera incurs additional cost. Therefore, for most mobile phones that are only equipped with color cameras, preserving the same low-light image quality produced by a dual-camera set while only using a single color camera is a challenging task.
28
+
29
+ Motivated by the above discussion, we propose a fully end-to-end convolutional neural model that consists of two modules (as illustrated in Fig. 1): a De-Bayer-Filter (DBF) module and a Dual Branch Low-light Enhancement module (DBLE). The DBF module learns to restore the monochrome raw image from the color camera raw data without requiring a monochrome camera. DBLE is designed to fuse colored raw with synthesized monochrome raw data and generate enhanced RGB images.
30
+
31
+ In addition, we propose a dataset to train our end-to-end framework. To the best of our knowledge, no existing dataset contains monochrome and colored raw image pairs captured by the same type of sensor. To establish such a dataset, one camera with a Bayer filter is used to capture color-patterned raw images. Another camera without a Bayer-filter but equipped with the same type of sensor is utilized to capture monochrome raw images (see Fig. 2(b)). The dataset is collected under various scenes, and each colored raw image has a corresponding monochrome raw image captured with identical exposure settings.
32
+
33
+ Our contributions can be summarised as:
34
+
35
+ 1. A De-Bayer-Filter model is proposed to simulate a virtual monochrome camera and synthesize monochrome raw image data from the colored raw input. The DBF module aims at predicting the monochrome raw images, which resembles a monochrome sensor capability. To the best of our knowledge, we are the first to explore removing the Bayer-filter using a deep learning-based model.
36
+ 2. We design a Dual Branch Low-light Enhancement model that is used to fuse the colored raw with the synthesized monochrome raw to produce the final monitor-ready RGB images. To bridge the domain gap
37
+
38
+ between colored raw and monochrome raw, a channel-wise attention layer is adopted to build an interaction between both domains for better restoration performance. The experimental results indicate that state-of-the-art performance can be achieved.
39
+
40
+ 3. We propose MCR, a dataset of colored raw and monochrome raw image pairs captured with the same exposure settings. It will be released publicly after publication as a research resource to facilitate community use.
41
+
42
+ # 2. Related Work
43
+
44
+ To achieve the low-light image enhancement task, numerous methods have been proposed. These methods can be categorized as histogram equalization (HE) methods [1, 15, 29], Retinex methods [5, 26, 28, 33], defogging model methods [4], statistical methods [16, 17, 23], and machine learning methods [7, 11, 30, 34]. Recently, several works on raw image data have been proposed [3, 9, 22]. Our work also falls into this category; we will mainly discuss existing raw-based approaches in this section.
45
+
46
+ Deep neural networks have emerged as an approach to achieve the digital camera's image signal processing tasks. In 2018, a fully convolutional model, namely DeepISP, was proposed in [22] to learn mapping from the raw low-light mosaiced image to the final RGB image with high visual quality. To simulate the digital camera's image signal processing (ISP) pipeline, DeepISP first extracts low-level features and performs local modifications, then extracts higher-level features and performs a global correction. L1 norm and the multi-scale structural similarity index (MS-SSIM) loss in the Lab domain are utilized for training the DeepISP to simulate the ISP pipeline. When DeepISP is only used for low-level imaging tasks such as denoising and demosaicing, L2 loss will be utilized. Hence, both low-level tasks and higher-level tasks such as demosaicing, denoising, and color correction can be achieved by DeepISP. The results in [22] suggest superior performance compared with manufacturer ISP.
47
+
48
+ Another parallel work similar to DeepISP, namely seeing in the dark (SID), was proposed in [3]. In SID, a U-net [21] network is utilized to operate directly on raw sensor data and output human visual ready RGB images. A dataset of raw short-exposure low-light images with corresponding long-exposure reference images was established to train the model. Compared with the traditional image processing pipeline, significant improvement can be made as the results in [3] indicate. Later, an improved version of SID was proposed in [27]. Using a similar U-net network as the backbone, the authors introduced wavelet transform to conduct down-sampling and up-sampling operations. Perceptual loss [10] is used in [27] to train the network to better
49
+
50
+ ![](images/7c8ff4e5060975149d8538df9b9f03b8de442986ff60d8b2de13e71b343d8445.jpg)
51
+ Figure 2. (a) Images captured by color and monochrome cameras under different exposure time.; (b) Monochrome and color cameras used in our work for data collection.
52
+
53
+ ![](images/237bf4bca7c1b25e3a1b1f392a6487d1b9a60151fb3ab01f4b0533204504761a.jpg)
54
+
55
+ restore details in the image. In DID [18], the authors proposed replacing the U-net in SID with residual learning to better preserve the information from image features. Similar raw-based approaches have also been applied to videos, such as [2,9].
56
+
57
+ In addition to the raw-based approach, frequency-based decomposition has also been explored on the low-light image enhancement task. In [31], the authors proposed a pipeline, namely LDC, to achieve the low-light image enhancement task based on a frequency-based decomposition and enhancement model. The model first filters out high-frequency features and learns to restore the remaining low-frequency features based on an amplification operation. Subsequently, high-frequency details are restored. The results from [31] indicate that state-of-the-art performance can be achieved by LDC.
58
+
59
+ Various research has also been done to improve the efficiency of low-light image enhancement in raw domain. To achieve a computationally fast low-light enhancement system, the authors in [14] proposed a lightweight architecture (RED) for extreme low-light image restoration. Besides, the authors also proposed an amplifier module to estimate the amplification factor based on the input raw image. In [6], a self-guided neural network (SGN) was proposed to achieve a balance between denoising performance and the computational cost. It aims at guiding the image restoration process at finer scales by utilizing the large-scale contextual information from shuffled multi-resolution inputs.
60
+
61
+ The methods discussed above generally learn to map raw data captured by the camera to a human-visual-ready image. As raw data provides full information, these approaches achieve state-of-the-art performance. However, the performance of those methods is upper bounded by the information contained in the raw data. In our work, by contrast, we introduce extra information beyond the raw-RGB data.
62
+
63
+ # 3. The Method
64
+
65
+ Motivated by the above discussion and inspired by the monochrome camera's high light sensitivity, we propose
66
+
67
+ a novel pipeline to further push the raw-based approaches forward. Specifically, our pipeline takes a raw image captured by a color camera with a Bayer-Filter as input. The De-Bayer-Filter module in our pipeline will first generate a monochrome image; a dual branch low-light enhancement module then fuses the monochrome raw data and color raw data to produce the final enhanced RGB image. Both modules work on raw images, as raw images are linearly dependent on the number of photons received, which contains additional information compared to RGB images such as the noise distribution [2, 20]. Details of each module will be discussed subsequently. A detailed architecture diagram of our framework is shown in Fig. 3(a) (more details are discussed in the supplementary). Furthermore, Fig. 3(b-f) and Fig. 3(g-k) visualize the output of each step of our model on our dataset and the SID dataset in [3], respectively.
68
+
69
+ # 3.1. De-Bayer-Filter Module
70
+
71
+ Millions of tiny light cavities are designed to collect photons and activate electrical signals on the camera sensor. However, using those light cavities alone can only produce gray images. A Bayer color filter is therefore designed to cover the light cavities and collect color information to produce color images. More specifically, a standard Bayer unit is a $2 \times 2$ pixel block with two green, one red and one blue color filters, and filters of a certain color will only allow photons with the corresponding wavelength to pass through.
72
+
73
+ Simulating the camera imaging process using neural networks has been demonstrated feasible in several works [3,20,22]. Inspired by those works, we consider the removal of the Bayer array filter virtually by modeling the relationship between input and output photons for each color filter. Specifically, a De-Bayer-Filter (DBF) module is designed in this work to restore the monochrome raw images $A_{mono} \in \mathbb{R}^{H \times W}$ from the input colored raw $A_{color} \in \mathbb{R}^{\frac{H}{2} \times \frac{W}{2} \times 4}$ :
74
+
75
+ $$
76
+ A_{Mono} = f_{M}\left(A_{Color}\right) \tag{1}
77
+ $$
78
+
79
+ where $f_{M}(\cdot)$ is a U-net-based fully convolutional network (see Fig. 3). L1 distance between the ground-truth monochrome image $A_{Mono}^{GT}$ and predicted image $A_{Mono}$
80
+
81
+ ![](images/ab0970ce5f2e08ad947f972b8eb7a5ecfa47ef3787d1a55b03d5acfda18cb25d.jpg)
82
+
83
+ ![](images/6aebc6f8c92c288becfa0320736789297d9d764ce251ee3a51dff0edd9d0622c.jpg)
84
+
85
+ ![](images/4eef53747a4a2503a8d5ea65c61d513a00940fde47eb4ebe3a3fdb2811e5715d.jpg)
86
+ (a) Architecture of the pipeline
87
+
88
+ ![](images/f00e16fb95aaf43f717431df887a67af5d27182ec47b70004b17e79bbe16652f.jpg)
89
+
90
+ ![](images/b63a14a416c4e7478ba836779a8a3318ff0bddaf3118b6f61ce6e7653e58f8f1.jpg)
91
+
92
+ ![](images/65f89f4cc95cba47406cc05a097a02ff01195b7193ddb5bfa779d633bbaef6d1.jpg)
93
+
94
+ ![](images/67db215d7b8ae28307f0a4a0b2e55c9f6b486918caaf1d0e3889e9082d0a6d46.jpg)
95
+ (b) Input
96
+ (g) Input
97
+ Figure 3. (a) is the architecture of the pipeline. DBF module is designed to produce a monochrome image from the input raw image. DBLE module is proposed to fuse color and monochrome raw images to enhance the low-light input image. Each box denotes a multi-channel feature map produced by each layer. (b)-(f) are the images of our pipeline trained on our dataset. (g)-(k) are the images of our pipeline trained on SID [3] dataset; we convert RGB ground truth (GT) in SID dataset to gray image to replace the monochrome GT in our dataset.
98
+
99
+ ![](images/a2eec5f10a6eae2217c466b607fa31e6a6f5baf378d4e98a7e9a9e0a8cf8a0ba.jpg)
100
+ (c) Mono GT
101
+ (h) Synthetic Mono GT
102
+
103
+ ![](images/cdabd0f64dbc6e459aede4c357f7ccdbaad11e29518752ab3cea1b1c2037dd58.jpg)
104
+ (d) DBF Output
105
+ (i) DBF Output
106
+
107
+ ![](images/3223f8ca4d840e240845daf9bacd6cef968da4c7effed8a6c015cd4d7d4a1844.jpg)
108
+ (e) RGB GT
109
+ (j) RGB GT
110
+
111
+ ![](images/08b103ad273d966a1de7e180b799fe2f8304f3a6fbaa2b49dfc8651e88c5a7b6.jpg)
112
+ (f) DBLE Output
113
+ (k) DBLE Output
114
+
115
+ is used as a loss to encourage the DBF to learn to restore monochrome images with more details from low-light raw images. We hypothesize that the generated monochrome raw image can enhance the low-light image by introducing more information into the subsequent module.
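To make the objective in Eq. (1) concrete, here is a minimal PyTorch sketch of the DBF step. The tiny stand-in network, tensor sizes, and single loss step are illustrative assumptions; the actual DBF is a U-Net-based fully convolutional network whose exact architecture is given in the supplementary.

```python
import torch
import torch.nn as nn

class TinyDBF(nn.Module):
    """Hypothetical stand-in for f_M in Eq. (1): maps packed colored raw
    (B, 4, H/2, W/2) to a predicted monochrome raw image (B, 1, H, W)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 4, 3, padding=1),
        )
        self.up = nn.PixelShuffle(2)  # 4 channels at H/2 x W/2 -> 1 channel at H x W

    def forward(self, a_color):
        return self.up(self.body(a_color))

dbf = TinyDBF()
a_color = torch.rand(1, 4, 256, 256)       # packed Bayer raw (assumed size)
a_mono_gt = torch.rand(1, 1, 512, 512)     # ground-truth monochrome raw

a_mono = dbf(a_color)                      # Eq. (1): A_mono = f_M(A_color)
loss_dbf = nn.L1Loss()(a_mono, a_mono_gt)  # L1 distance to the monochrome ground truth
loss_dbf.backward()
```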
116
+
117
+ # 3.2. Dual Branch Low-Light Image Enhancement Module
118
+
119
+ There are many differences between the colored raw image and monochrome image: 1) colored raw images have mosaic patterns; 2) the colored raw images consist of four channels with a resolution of $\frac{H}{2} \times \frac{W}{2}$ , while their counterparts consist of one channel with $H \times W$ resolution; 3)
120
+
121
+ no color information is included in the monochrome images; 4) better illuminating information is preserved on monochrome images as the monochrome camera sensor can better capture the light.
122
+
123
+ Based on the above observations, we propose a dual branch low-light image enhancement (DBLE) module (see Fig. 3), which treats the DBF-generated monochrome raw image and the colored raw image separately in the down-sampling process. Meanwhile, feature maps at different levels of the two down-sampling branches are fused by concatenation, followed by a channel-wise attention (CA) layer [8] in the up-sampling branch to synthesize the human-visual-ready RGB image $I_{RGB} \in \mathbb{R}^{H \times W \times 3}$ . The
124
+
125
+ DBLE module is defined as:
126
+
127
+ $$
128
+ I_{RGB} = f_{C}\left(A_{Color}; A_{Mono}\right), \tag{2}
129
+ $$
130
+
131
+ where $f_{C}$ is a specifically designed fully convolutional network, which is shown in Fig. 3 (a). L1 distance between the ground truth RGB image $I_{RGB}^{GT}$ and predicted image $I_{RGB}$ is used as the loss to encourage the DBLE to learn to restore visual-ready RGB output from low-light raw images.
132
+
133
+ As the conventional U-net network treats features from each channel equally, directly concatenating the feature map from the monochrome raw branch and colored raw branch may lead to contradiction due to the domain gap. The usage of strided convolution and transposed convolution layers will also lead to spatial information loss. Motivated by [32], after the concatenation operation, a CA layer [8] is adopted to achieve a channel-wise attention recalibration in DBLE to bridge the gap between monochrome and color images. The CA layer can explicitly model the interaction of colored raw and monochrome raw modalities to exploit the complementariness and reduce contradiction from both domains.
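For illustration, a minimal PyTorch sketch of such a squeeze-and-excitation style channel-wise attention recalibration [8], applied to concatenated features from the two branches, is given below; the channel counts and reduction ratio are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (CA) layer [8]."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pooling
        self.fc = nn.Sequential(                     # excitation: per-channel weights
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # recalibrate each channel

# Fuse same-resolution feature maps from the monochrome and colored-raw branches.
feat_mono = torch.rand(1, 64, 128, 128)   # assumed feature size
feat_color = torch.rand(1, 64, 128, 128)
fused = ChannelAttention(128)(torch.cat([feat_mono, feat_color], dim=1))
```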
134
+
135
+ It has been reported that upsampling layers (transposed convolutional layers) used in U-net cause images to be distorted by checkerboard artifacts [13, 19, 24, 25]. We also found such checkerboard artifacts in our U-net settings, especially for images with white backgrounds. In our work, the CA layer also serves a role in avoiding checkerboard artifacts. As downscale and upscale operations are included in the CA layer, it is similar to the resize-convolution operation, which discourages high-frequency artifacts in a weight-tying manner [19].
136
+
137
+ # 3.3. Dataset Design
138
+
139
+ Mono-Colored Raw Paired (MCR) Dataset. To the best of our knowledge, no existing dataset contains monochrome and Bayer raw image pairs captured by the same type of sensors. To establish the dataset, we capture image pairs of the same scenes with two cameras, denoted as Cam-Color and Cam-Mono<sup>3</sup>. Both cameras have the same 1/2-inch CMOS sensor and output a $1,280\mathrm{H}\times 1,024\mathrm{V}$ imaging pixel array. However, only Cam-Color is equipped with a Bayer color filter. Cam-Color is used to capture colored raw images in our work, and Cam-Mono captures monochrome raw images.
140
+
141
+ We collect the data in both indoor and outdoor conditions. The illuminance at the indoor scenes is between 50 lux and 2,000 lux under regular lights. The outdoor images were captured during daytime and night, under sun lighting or street lighting, with an illuminance between 900 lux and 14,000 lux. The captured scenes include toys, books, stationery objects, street views, and parks.
142
+
143
+ Table 1. Summary of the dataset
144
+
145
+ <table><tr><td>Scenes</td><td>Exposure time (s)</td><td>Data Pairs</td><td>Fixed Settings</td></tr><tr><td>Indoor fixed position</td><td>1/256, 1/128, 1/64, 1/32, 1/16, 1/8, 1/4, 3/8</td><td>2744 pairs</td><td rowspan="3">Format: .raw, resolution: 1280*1024</td></tr><tr><td>Indoor sliding platform</td><td>1/256, 1/128, 1/64, 1/32, 1/16, 1/8, 1/4, 3/8</td><td>800 pairs</td></tr><tr><td>Outdoor sliding platform</td><td>1/4096, 1/2048, 1/1024, 1/512, 1/256, 1/128, 1/64, 1/32</td><td>440 pairs</td></tr></table>
146
+
147
+ The cameras are mounted on the sliding platform on sturdy tripods or a fixed platform on a sturdy table. When mounted on the sliding platform, the camera is adjusted to the same position by sliding the platform to minimize the position displacement among images captured by two cameras in the same scene. When mounted on the fixed platform, the camera is attached to the same position as the platform to minimize the position displacement. Camera gain is set with the camera default value. Focal lengths are adjusted to maximize the quality of the images under long exposure. The exposure time is adjusted according to the specific scene environment.
148
+
149
+ Position displacement is unavoidable in the capture process. Hence, it is necessary to align the images captured from the two cameras. The best-exposed colored raw and monochrome raw images are selected to align the images captured by the two cameras in the same scene. Then, homography feature matching is utilized to extract key points from the selected image pair, and a brute-force matcher is utilized to find the matched key points. The extracted locations of good matches are filtered based on an empirical thresholding method. A homography matrix is then estimated from the filtered locations of good matches. Finally, the homography transformation is applied to the other images captured from the same scene. Statistics of the dataset are summarized in Table 1. Fig. 2(a) demonstrates a series of monochrome-colored raw paired images from the dataset.
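The alignment step can be sketched with OpenCV as follows. The use of ORB features, the keep-ratio for good matches, and RANSAC are illustrative assumptions; the paper only specifies homography feature matching, a brute-force matcher, and an empirical threshold.

```python
import cv2
import numpy as np

def align_to_reference(ref_gray, src_gray, images_to_warp):
    """Estimate a homography mapping src -> ref from one well-exposed pair,
    then warp the other images captured from the same scene/camera."""
    orb = cv2.ORB_create(5000)                       # assumed feature detector
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
    kp_src, des_src = orb.detectAndCompute(src_gray, None)

    # Brute-force matching followed by an empirical threshold (keep best 20%).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_src, des_ref), key=lambda m: m.distance)
    good = matches[: max(4, int(0.2 * len(matches)))]

    src_pts = np.float32([kp_src[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

    h, w = ref_gray.shape[:2]
    return [cv2.warpPerspective(img, H, (w, h)) for img in images_to_warp]
```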
150
+
151
+ Artificial Mono-Colored Raw SID Dataset. The original SID dataset collected in [3] contains 5,094 raw short-exposure images taken from the indoor and outdoor environments, while each short-exposure image has a corresponding long-exposure reference image. The short exposure time is usually between 1/30 second and 1/10 second, and the exposure time of the corresponding long-exposure image is 10 to 30 seconds.
152
+
153
+ However, monochrome images are not available in the original SID dataset. To address this, we built an artificial Mono-colored raw dataset based on SID [3] dataset in this work. More specifically, we first convert the long-exposure raw images in the original SID dataset to RGB images, and these RGB images are further converted to grayscale by forming a weighted sum of the R, G, and B channels, as shown in Fig. 3(h). Such conversion can eliminate the hue and saturation information while retaining the luminance information.
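For reference, such a weighted-sum conversion can be written as follows; the BT.601-style weights are an assumption, since the paper does not state the exact coefficients.

```python
import numpy as np

def rgb_to_gray(rgb):
    """Weighted sum of the R, G, B channels: drops hue and saturation,
    retains luminance. `rgb` has shape (H, W, 3)."""
    weights = np.array([0.299, 0.587, 0.114])  # assumed BT.601-style weights
    return rgb.astype(np.float64) @ weights
```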
154
+
155
+ ![](images/810ceb6d35df676a9d18a4bf60f960af8a783ebbc965ac9c40d3b6d2a131e7d5.jpg)
156
+ (a) Input raw
157
+
158
+ ![](images/7aaf360f6a9ff7f3cc21a082994c13c872c2f84d1f307fbde5e1d48c1aebc1f3.jpg)
159
+ (b) CSAIE
160
+
161
+ ![](images/2457e0d597f9fbd88f190d8899872bc1568ece7d4ad55c78b29049947e4fc890.jpg)
162
+
163
+ ![](images/ee39515e1b8044c715880deba2d9184f8f90f15db1185c64f6cf08ab2914ea74.jpg)
164
+
165
+ ![](images/308319e9965e199eee4d2dc3ea32ea91557b4c2d974b5dc32f6abb2d84c0700d.jpg)
166
+
167
+ ![](images/dbee6f3778b1b569fd4d563dc6ede19b7009d0bbb9102a8cc75794965c2d22a7.jpg)
168
+
169
+ ![](images/8f81d3e0019eeb5e185eee565cdf67ab583709074c37da7db01e5d8e03041ed3.jpg)
170
+
171
+ ![](images/2a3f5c6c415c69b4f21ac0c7c44dadfe5a6827292b41710337bfbef774f0d072.jpg)
172
+ (c) HE
173
+
174
+ ![](images/e8b6aefe5b9094ed1b1b87e89230ea5532e70df61f223eb4d9974e7159c8ca12.jpg)
175
+ (d) SGN [6]
176
+
177
+ ![](images/a8e9be97366ac3c34da7811a6b6404398081da9892dcf690e9585205d11f7eb6.jpg)
178
+ (e) DID [18]
179
+
180
+ ![](images/9f58e229f33c87c376da7d8c7a656e289ead7994ca56af3ad0626631275d4bc5.jpg)
181
+ (f) RED [14]
182
+
183
+ ![](images/d7cdf76387085d7c6ca0f70b859e176855e5a2c6cd3986c80ab08d8d9583c668.jpg)
184
+ (g) SID [3]
185
+
186
+ ![](images/bd781eb0e87764ac232ba67cebb4abe5d1016a2b1fcfb639984caa510c53d277.jpg)
187
+ (h) LDC [31]
188
+
189
+ ![](images/ab0a91485b9f5d48f6ada4d0e54223847948f5bdd4a220f5ea1262b8db8dad16.jpg)
190
+ (i) Ours
191
+
192
+ ![](images/995945318c5572558cb86923eb4cac5b7379a060e116aa91b4e95fcd2e059f3a.jpg)
193
+ (j) GT
194
+
195
+ ![](images/e0818c867fd6dbb506dc2eae10f55b3387a36325dda752eb713f83d4b3188fa6.jpg)
196
+ (A) Input raw
197
+ (F) RED
198
+ Figure 4. Visual results of state-of-the-art methods and ours on low-light raw images in our dataset. The larger boxes show the zoomed-in version of the regions in the smaller boxes of the same color. 'CSAIE' stands for 'Commercial Software Automatic Image Enhancement'.
199
+
200
+ ![](images/4ac98661af38b2f012b85c1a7a7eb80ea7a95c64630a42c8017299a61d6578f3.jpg)
201
+ (B) CSAIE
202
+ (G) SID [3]
203
+
204
+ ![](images/3121186b5c974b44004338155379d3be614d7569f75a78a10e10e6aced3d86c3.jpg)
205
+ (C) HE
206
+ (H) LDC [31]
207
+
208
+ ![](images/888982965f4a37e2ec4b3e6ec6a5ab997dd7088f708caac0aa326e6ec176f79f.jpg)
209
+ (D) SGN [6]
210
+ (I) Ours
211
+
212
+ ![](images/de432d8b1a242aeba45078abefbb626d3b2e3b959654c5e67e02a00004aee343.jpg)
213
+ (E) DID [18]
214
+ (J) GT
215
+
216
+ # 3.4. Training
217
+
218
+ By default, we pre-process the input images similarly to [3]: pixel values are amplified with predefined ratios, followed by a pack-raw operation. We incorporate the CA layer [8] to bridge the domain gap between features from monochrome and colored raw images. The whole system is trained jointly with L1 loss to directly output the corresponding long-exposure monochrome and sRGB images.
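A minimal sketch of this pre-processing, in the spirit of the pack-raw step of SID [3], is shown below. The black and white levels correspond to the 14-bit sensor used in SID and are placeholders here, since the MCR cameras output 8-bit raw; the amplification ratio is also scene-dependent.

```python
import numpy as np

def pack_raw(bayer, black_level=512, white_level=16383, ratio=100.0):
    """Pack an RGGB Bayer mosaic (H, W) into 4 half-resolution channels
    (H/2, W/2, 4), normalize, and amplify by a predefined ratio."""
    x = (bayer.astype(np.float32) - black_level) / (white_level - black_level)
    x = np.maximum(x, 0.0)
    packed = np.stack([x[0::2, 0::2],    # R
                       x[0::2, 1::2],    # G1
                       x[1::2, 0::2],    # G2
                       x[1::2, 1::2]],   # B
                      axis=-1)
    return np.clip(packed * ratio, 0.0, 1.0)
```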
219
+
220
+ The dataset is split into non-overlapping train and test sets at a ratio of 9:1. Input patches of $512 \times 512$ are randomly cropped from the original images. In the case of raw image input, the RGGB pixel positions are carefully preserved in the cropping process. We implement our model with PyTorch 1.7 on an RTX 3090 GPU platform, and we train the networks from scratch using the Adam [12] optimizer. The learning rate was set to $10^{-4}$, reduced to $10^{-5}$ after convergence, and the weight decay was set to 0.
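A sketch of a Bayer-phase-preserving random crop and the optimizer setup described above; the even crop offsets, the placeholder model, and the learning-rate milestone are illustrative assumptions rather than the authors' exact code.

```python
import random
import torch

def random_crop_pair(bayer, mono, patch=512):
    """Randomly crop a patch using even offsets so the RGGB phase of the
    Bayer mosaic is preserved; the monochrome image is cropped identically."""
    h, w = bayer.shape[:2]
    top = random.randrange(0, h - patch + 1, 2)
    left = random.randrange(0, w - patch + 1, 2)
    return (bayer[top:top + patch, left:left + patch],
            mono[top:top + patch, left:left + patch])

# Adam with lr 1e-4 and weight decay 0; lr dropped to 1e-5 once training converges
# (the milestone below is a placeholder for that point).
model = torch.nn.Conv2d(4, 3, 3, padding=1)   # placeholder for the DBF + DBLE network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0.0)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100], gamma=0.1)
```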
221
+
222
+ # 4. Experiments and Results
223
+
224
+ In this section, we present a comprehensive performance evaluation of the proposed low-light image enhancement system. We evaluate the system in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). For both PSNR and SSIM, a higher value means better similarity between the output image and the ground truth.
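For reference, PSNR on images scaled to [0, 1] can be computed as below (SSIM is typically computed with an existing implementation such as scikit-image's `structural_similarity`); this is the generic definition, not the authors' evaluation code.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the ground truth."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```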
225
+
226
+ # 4.1. Comparison with State-of-the-Art Methods
227
+
228
+ Qualitative Comparison. We first visually compare the results of the proposed method with other state-of-the-art deep learning-based image enhancement methods, including SID [3], DID [18], SGN [6], LDC [31], and RED [14]. In addition, the traditional histogram equalization (HE) approach and a Commercial Software Automatic Image Enhancement (CSAIE) method are also included in the comparison.
229
+
230
+ Table 2. Comparison with SOTA.
231
+
232
+ <table><tr><td></td><td colspan="2">MCR Dataset</td><td colspan="2">SID Dataset</td></tr><tr><td></td><td>PSNR (dB)</td><td>SSIM</td><td>PSNR (dB)</td><td>SSIM</td></tr><tr><td>RED [14] (21,CVPR)</td><td>25.74</td><td>0.851</td><td>28.66</td><td>0.790</td></tr><tr><td>SGN [6] (19,ICCV)</td><td>26.29</td><td>0.882</td><td>28.91</td><td>0.789</td></tr><tr><td>DID [18] (19,ICME)</td><td>26.16</td><td>0.888</td><td>28.41</td><td>0.780</td></tr><tr><td>SID [3] (18,CVPR)</td><td>29.00</td><td>0.906</td><td>28.88</td><td>0.787</td></tr><tr><td>LDC [31] (20,CVPR)</td><td>29.36</td><td>0.904</td><td>29.56</td><td>0.799</td></tr><tr><td>Ours</td><td>31.69</td><td>0.908</td><td>29.65</td><td>0.797</td></tr></table>
233
+
234
+ Fig. 4 shows the results of different methods on two low-light images (see more results in the supplementary).
235
+
236
+ As indicated by Fig. 4, our method achieves better enhancement and denoising visual performance. Specifically, checkerboard artifacts are usually found on SID for images with white backgrounds. This is because of the usage of upsampling layers in the model. Foggy artifacts are usually observed on SGN; color distortions are also found on SGN, DID, and RED, as shown in Fig. 4 (A-J), where the green plant enclosed by the yellow box becomes black after restoration by SGN, DID, and RED. Compared to LDC, our method can preserve more details, as over-smoothing is usually found on LDC. Note that over-smoothing may be more visually appealing, but details will be lost; for example, the wall crack becomes invisible on LDC as shown in Fig. 4 (H-I). In a nutshell, Fig. 4 demonstrates the satisfying visual performance achieved by our method, with fewer artifacts and more convincing restoration.
237
+
238
+ Quantitative Comparison. A quantitative comparison against the state-of-the-art enhancement methods has also been performed. For a fair comparison, SID [3], DID [18], SGN [6], LDC [31], and RED [14] were trained on the MCR dataset.
239
+
240
+ As Table 2 shows, our proposed method outperforms its counterparts by a large margin. Specifically, our method can achieve a PSNR of 31.69dB on MCR dataset, which is $7.9\%$ higher than the second-best method, i.e., the LDC [31]. Our method can also achieve an SSIM of 0.908, which is the highest among all compared methods.
241
+
242
+ Compared to other methods, we incorporate the extra monochrome information into the processing pipeline, hence state-of-the-art performance can be achieved. As shown in the first two data rows in Table 2, both RED [14] and SGN [6] can only achieve a PSNR of around $26\mathrm{dB}$ . Both RED and SGN aim at reducing the computational cost and improving efficiency. Hence it is reasonable to observe the performance degradation. The result on DID [18] from Table 2 suggests that replacing U-net with residual learning cannot achieve superior performance on our dataset.
243
+
244
+ On the MCR dataset, SID [3] achieves a PSNR of only 29.00dB. The checkerboard artifact may be the reason. From Table 2, we observe that LDC [31] achieves the second-best performance. This is because it is based on
245
+
246
+ a frequency-based decomposition and enhancement model, which can better restore the noisy image and avoid noise amplification. We also train our model on the modified SID dataset to further validate our method for a fair comparison. The performance results are shown in the SID column in Table 2. As the results suggest, our method also outperforms all its counterparts. Specifically, our method can achieve a PSNR of $29.65\mathrm{dB}$ , which is around 0.1dB higher than LDC, while the SSIM can achieve similar performance.
247
+
248
+ Other methods, including SID, DID, SGN, and RED, can only achieve a PSNR of around 28dB. In summary, the results show that our model is more effective in enhancing noisy low-light images. The performance of most existing methods is upper bounded by the information contained in the raw data. In our proposed pipeline, we further extend this upper bound by considering the monochrome domain. Hence, better performance can be achieved.
249
+
250
+ # 4.2. Ablation Studies
251
+
252
+ In this subsection, we provide several ablation studies for the proposed system to better demonstrate the effectiveness of each module of our system.
253
+
254
+ Checkerboard artifacts were found in our preliminary exploration stage, especially for images with white backgrounds. To eliminate checkerboard artifacts, we incorporate the CA layer [8] in the DBLE module. In this ablation study, we first remove the CA layer in the DBLE module to demonstrate the elimination of checkerboard artifacts and the resulting performance gain. Besides, we also train an original SID [3] network on our dataset to show the visual effect of the checkerboard artifacts of U-net. The restored images from SID, DBLE without the CA layer, and DBLE with the CA layer are shown in Fig. 5. It is observed that checkerboard artifacts can be perfectly avoided by introducing the CA layer. Besides, as per the quantitative results shown in Table 3, the CA layer boosts the image enhancement performance, as the PSNR increases to $31.69\mathrm{dB}$ compared with its counterpart of $29.23\mathrm{dB}$ .
255
+
256
+ We also train the model to learn the ratio directly instead of amplifying image pixel values with predefined ratios. Hence, we train a model without amplifying the input raw images with the predefined ratio. As a result, as shown in Table 3, such a model can still achieve comparable performance, with only a slight decrease in PSNR and SSIM.
257
+
258
+ We also replace the packed-raw input suggested by [3] with the original one-channel raw images. As shown in the baseline-without-packraw row of Table 3, PSNR and SSIM degradation is observed. We argue that packing the raw data helps the model better process the color information.
259
+
260
+ Changing the loss function from L1 to L2 does not achieve better performance, as shown in Table 3. We also try changing the raw input into sRGB format. The result in the sRGB row of Table 3 shows a significant performance
261
+
262
+ ![](images/64539c4d02c42e93e224ab70726c0c7bd15611cf641d8794c8ab966908050879.jpg)
263
+ (a) GT
264
+
265
+ ![](images/f84434afa637b08dfe53dbac6f2fa204fcad95d65e4fb58ea7bdbde63472a86b.jpg)
266
+ (b) SID [3]
267
+
268
+ ![](images/afa1df0c7f8332c68dbfb2bf8a2ffc61112f71ba3e24985ca7b53544098a53c5.jpg)
269
+ (c) Ours w/o CA [8]
270
+ Figure 5. Visual demonstration of checkerboard artifacts under different settings.
271
+
272
+ ![](images/0ac53dd99007d7bff00ff55e14833b16e9c03ae3e1201cb7487ea806a24ccf6e.jpg)
273
+ (d) Ours with CA [8]
274
+
275
+ Table 3. Ablation study on the MCR dataset.
276
+
277
+ <table><tr><td></td><td colspan="2">DBF</td><td colspan="2">DBLE</td></tr><tr><td></td><td>PSNR (dB)</td><td>SSIM</td><td>PSNR (dB)</td><td>SSIM</td></tr><tr><td>Baseline</td><td>21.0607</td><td>0.8254</td><td>31.6905</td><td>0.9083</td></tr><tr><td>Baseline wo CA [8]</td><td>20.2673</td><td>0.7948</td><td>29.2350</td><td>0.8732</td></tr><tr><td>Baseline wo ratio</td><td>19.8978</td><td>0.7868</td><td>29.3528</td><td>0.8878</td></tr><tr><td>Baseline wo packraw</td><td>20.7846</td><td>0.8034</td><td>28.8728</td><td>0.8657</td></tr><tr><td>Baseline l1→l2</td><td>20.4587</td><td>0.8016</td><td>30.2359</td><td>0.8974</td></tr><tr><td>Baseline w/o DBF</td><td>-</td><td>-</td><td>29.9946</td><td>0.8839</td></tr><tr><td>Baseline raw→sRGB</td><td>18.2369</td><td>0.7625</td><td>27.3521</td><td>0.8295</td></tr></table>
278
+
279
+ drop, which is consistent with other works [3, 31].
280
+
281
+ The DBF module plays a key role in our system in generating the monochrome images, which assist the DBLE module in restoring the low-light images into monitor-ready sRGB images. We also explore the performance of a model without DBF module and the monochrome branch. As the results in Table 3 show, the performance drops to $29.99\mathrm{dB} / 0.883$ in terms of PSNR/SSIM when the DBF module is removed, hence providing a solid validation of the DBF's effectiveness.
282
+
283
+ # 5. Limitations and Future Work
284
+
285
+ There are various aspects to improve in the future. The cameras adopted in this work can only output 8-bit raw images; 16-bit cameras will be used to collect data in the future to cover more diverse scenes and objects. Besides, the network needs to be more lightweight to deploy the proposed system in the real world. Extending the proposed work to videos is also a future direction. We hope the work presented in this paper can provide preliminary explorations for low-light image enhancement research in the community and industry. On some extremely dark images in our MCR dataset, existing low-light image enhancement algorithms (SID [3], LDC [31], and ours) sometimes show unsatisfactory results. The restored images usually lose the high-frequency edge information compared to the ground-truth image and become blurred (see supplementary). Extremely dark settings sometimes yield quite weak signals in each color channel,
286
+
287
+ leading to color artifacts that commonly exist in both SoTA methods and ours and require further study.
288
+
289
+ # 6. Conclusion
290
+
291
+ Removing the Bayer-filter allows more photons to be captured by the sensor. Motivated by this fact, this work proposes an end-to-end fully convolutional network consisting of a DBF module and a dual branch low-light enhancement module to achieve low-light image enhancement on a single color camera system. The DBF module is devised to predict the corresponding monochrome raw image from the color camera raw data input. The DBLE is designed to restore the low-light raw images based on the raw input and the DBF-predicted monochrome raw images. DBLE treats the colored raw and monochrome raw separately by using a dual branch network architecture. In the DBLE upsampling stream, features from both monochrome raw and colored raw are fused together, and channel-wise attention is applied to the fused features.
292
+
293
+ We also propose a Mono-Colored Raw paired dataset (MCR) which includes color and monochrome raw image pairs collected by a color camera with a Bayer-Filter and a monochrome camera without a Bayer-Filter. The dataset is collected in various scenes, and each colored raw image has a corresponding monochrome raw image captured with the same exposure settings. To better show our superiority, the SID dataset is also adopted in the evaluation. A grayscale image is generated from the corresponding ground-truth color image in the SID dataset to serve as the monochrome image. Subsequently, a model is trained on the modified dataset to verify the performance.
294
+
295
+ Our experiments show that significant improvement can be achieved by leveraging raw sensor data and data-driven learning. Our method can overcome the checkerboard artifact found in U-net while preserving visual quality. Our quantitative experiments indicate that our method achieves state-of-the-art performance: a PSNR of 31.69dB on our own dataset, and 29.65dB on the SID dataset.
296
+
297
+ # References
298
+
299
+ [1] Tarik Arici, Salih Dikbas, and Yucel Altunbasak. A histogram modification framework and its application for image contrast enhancement. IEEE Transactions on image processing, 18(9):1921-1935, 2009. 2
300
+ [2] Chen Chen, Qifeng Chen, Minh N Do, and Vladlen Koltun. Seeing motion in the dark. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 3185-3194, 2019. 3
301
+ [3] Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3291-3300, 2018. 1, 2, 3, 4, 5, 6, 7, 8
302
+ [4] Xuan Dong, Guan Wang, Yi Pang, Weixin Li, Jiangtao Wen, Wei Meng, and Yao Lu. Fast efficient algorithm for enhancement of low lighting video. In 2011 IEEE International Conference on Multimedia and Expo (ICME), pages 1-6. IEEE, 2011. 2
303
+ [5] Minhao Fan, Wenjing Wang, Wenhan Yang, and Jiaying Liu. Integrating semantic segmentation and retinex model for low-light image enhancement. In Proceedings of the 28th ACM International Conference on Multimedia (ACMMM), pages 2317-2325, 2020. 2
304
+ [6] Shuhang Gu, Yawei Li, Luc Van Gool, and Radu Timofte. Self-guided network for fast image denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 2511-2520, 2019. 3, 6, 7
305
+ [7] Chunle Guo, Chongyi Li, Jichang Guo, Chen Change Loy, Junhui Hou, Sam Kwong, and Runmin Cong. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1780-1789, 2020. 2
306
+ [8] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7132-7141, 2018. 4, 5, 6, 7, 8
307
+ [9] Haiyang Jiang and Yinqiang Zheng. Learning to see moving objects in the dark. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 7324-7333, 2019. 2, 3
308
+ [10] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711. Springer, 2016. 2
309
+ [11] Guisik Kim, Dokyeong Kwon, and Junseok Kwon. Low-lightgan: Low-light enhancement via advanced generative adversarial network with task-driven training. In 2019 IEEE International Conference on Image Processing (ICIP), pages 2811-2815. IEEE, 2019. 2
310
+ [12] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6
311
+ [13] Yuma Kinoshita and Hitoshi Kiya. Fixed smooth convolutional layer for avoiding checkerboard artifacts in cnns. In ICASSP 2020-2020 IEEE International Conference on
312
+
313
+ Acoustics, Speech and Signal Processing (ICASSP), pages 3712-3716. IEEE, 2020. 5
314
+ [14] Mohit Lamba and Kaushik Mitra. Restoring extremely dark images in real time. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3487-3497, 2021. 3, 6, 7
315
+ [15] Chulwoo Lee, Chul Lee, and Chang-Su Kim. Contrast enhancement based on layered difference representation of 2d histograms. IEEE Transactions on Image Processing, 22(12):5372-5384, 2013. 2
316
+ [16] Mading Li, Xiaolin Wu, Jiaying Liu, and Zongming Guo. Restoration of unevenly illuminated images. In 2018 25th IEEE International Conference on Image Processing (ICIP), pages 1118-1122. IEEE, 2018. 2
317
+ [17] Zhetong Liang, Weijian Liu, and Ruohy Yao. Contrast enhancement by nonlinear diffusion filtering. IEEE Transactions on Image Processing, 25(2):673-686, 2015. 2
318
+ [18] Paras Maharjan, Li Li, Zhu Li, Ning Xu, Chongyang Ma, and Yue Li. Improving extreme low-light image denoising via residual learning. In 2019 IEEE International Conference on Multimedia and Expo (ICME), pages 916-921. IEEE, 2019. 3, 6, 7
319
+ [19] Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 1(10):e3, 2016.5
320
+ [20] Hao Ouyang, Zifan Shi, Chenyang Lei, Ka Lung Law, and Qifeng Chen. Neural camera simulators. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7700-7709, 2021. 3
321
+ [21] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted intervention, pages 234-241. Springer, 2015. 2
322
+ [22] Eli Schwartz, Raja Giryes, and Alex M Bronstein. Deepisp: Toward learning an end-to-end image processing pipeline. IEEE Transactions on Image Processing, 28(2):912-923, 2018. 1, 2, 3
323
+ [23] Haonan Su and Cheolkon Jung. Low light image enhancement based on two-step noise suppression. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1977-1981. IEEE, 2017. 2
324
+ [24] Yusuke Sugawara, Sayaka Shiota, and Hitoshi Kiya. Superresolution using convolutional neural networks without any checkerboard artifacts. In 2018 25th IEEE International Conference on Image Processing (ICIP), pages 66-70. IEEE, 2018. 5
325
+ [25] Yusuke Sugawara, Sayaka Shiota, and Hitoshi Kiya. Checkerboard artifacts free convolutional neural networks. APSIPA Transactions on Signal and Information Processing, 8, 2019. 5
326
+ [26] Ruixing Wang, Qing Zhang, Chi-Wing Fu, Xiaoyong Shen, Wei-Shi Zheng, and Jiaya Jia. Underexposed photo enhancement using deep illumination estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6849-6857, 2019. 2
327
+ [27] Yuanchen Wang, Xiaonan Zhu, Yuong Zhao, Ping Wang, and Jiquan Ma. Enhancement of low-light image based on
328
+
329
+ wavelet u-net. In Journal of Physics: Conference Series, volume 1345, page 022030. IOP Publishing, 2019. 2
330
+ [28] Chen Wei, Wenjing Wang, Wenhan Yang, and Jiaying Liu. Deep retinex decomposition for low-light enhancement. arXiv preprint arXiv:1808.04560, 2018. 2
331
+ [29] Xiaomeng Wu, Xinhao Liu, Kaoru Hiramatsu, and Kunio Kashino. Contrast-accumulated histogram equalization for image enhancement. In 2017 IEEE international conference on image processing (ICIP), pages 3190–3194. IEEE, 2017. 2
332
+ [30] Ke Xu, Xin Yang, Baocai Yin, and Rynson WH Lau. Learning to restore low-light images via decomposition-and-enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2281-2290, 2020. 2
333
+ [31] Ke Xu, Xin Yang, Baocai Yin, and Rynson WH Lau. Learning to restore low-light images via decomposition-and-enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2281-2290, 2020. 3, 6, 7, 8
334
+ [32] Lu Zhang, Zhiyong Liu, Shifeng Zhang, Xu Yang, Hong Qiao, Kaizhu Huang, and Amir Hussain. Cross-modality interactive attention network for multispectral pedestrian detection. Information Fusion, 50:20-29, 2019. 5
335
+ [33] Yonghua Zhang, Jiawan Zhang, and Xiaojie Guo. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM international conference on multimedia (ACMMM), pages 1632-1640, 2019. 2
336
+ [34] Minfeng Zhu, Pingbo Pan, Wei Chen, and Yi Yang. Eemefn: Low-light image enhancement via edge-enhanced multi-exposure fusion network. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13106-13113, 2020. 2
abandoningthebayerfiltertoseeinthedark/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fadb408711b0aa6488d6f0af099b5c9a39ef61f8ac1880d2f71bbf634006068c
3
+ size 550259
abandoningthebayerfiltertoseeinthedark/layout.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b9c9fde872764aa664df93776880c26a2927e4ed80b887034557147ed797298b
3
+ size 375247
abodatasetandbenchmarksforrealworld3dobjectunderstanding/165b1a6d-d340-46a6-8a6b-371c91127e9b_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5df04da9cb5a25d8bb8f985d9b38ba2fd9e9690f7fd98f08533310c4a93a70c7
3
+ size 73706
abodatasetandbenchmarksforrealworld3dobjectunderstanding/165b1a6d-d340-46a6-8a6b-371c91127e9b_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4bc18c05efb1a30e9d52c927843670766bf6d741f3fee538f11b242f891471fd
3
+ size 93465
abodatasetandbenchmarksforrealworld3dobjectunderstanding/165b1a6d-d340-46a6-8a6b-371c91127e9b_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:89789c736c71cc25a75731f8dd2f555c41815591208f6d65e8afba8243bc4e9e
3
+ size 6417627
abodatasetandbenchmarksforrealworld3dobjectunderstanding/full.md ADDED
@@ -0,0 +1,259 @@
1
+ # ABO: Dataset and Benchmarks for Real-World 3D Object Understanding
2
+
3
+ Jasmine Collins<sup>1</sup>, Shubham Goel<sup>1</sup>, Kenan Deng<sup>2</sup>, Achleshwar Luthra<sup>3</sup>, Leon Xu<sup>1,2</sup>, Erhan Gundogdu<sup>2</sup>, Xi Zhang<sup>2</sup>, Tomas F. Yago Vicente<sup>2</sup>, Thomas Dideriksen<sup>2</sup>, Himanshu Arora<sup>2</sup>, Matthieu Guillaumin<sup>2</sup>, and Jitendra Malik<sup>1</sup>
4
+
5
+ <sup>1</sup> UC Berkeley, <sup>2</sup> Amazon, <sup>3</sup> BITS Pilani
6
+
7
+ ![](images/a607b822c22f3805cb0b5410b8dac6db5718957e3ac10f2a3fe2936f36b21ade.jpg)
8
+ Figure 1. ABO is a dataset of product images and realistic, high-resolution, physically-based 3D models of household objects. We use ABO to benchmark the performance of state-of-the-art methods on a variety of realistic object understanding tasks.
9
+
10
+ ![](images/f40075b894e735d382b2501316db33d1e309c093cfaf204a4696a2a5bcd252f9.jpg)
11
+
12
+ # Abstract
13
+
14
+ We introduce Amazon Berkeley Objects (ABO), a new large-scale dataset designed to help bridge the gap between real and virtual 3D worlds. ABO contains product catalog images, metadata, and artist-created 3D models with complex geometries and physically-based materials that correspond to real, household objects. We derive challenging benchmarks that exploit the unique properties of ABO and measure the current limits of the state-of-the-art on three open problems for real-world 3D object understanding: single-view 3D reconstruction, material estimation, and cross-domain multi-view object retrieval.
15
+
16
+ # 1. Introduction
17
+
18
+ Progress in 2D image recognition has been driven by large-scale datasets [15,26,37,43,56]. The ease of collecting 2D annotations (such as class labels or segmentation masks) has led to the large scale of these diverse, in-the-wild datasets, which in turn has enabled the development of 2D computer vision systems that work in the real world.
19
+
20
+ Theoretically, progress in 3D computer vision should follow from equally large-scale datasets of 3D objects. However, collecting large amounts of high-quality 3D annotations (such as voxels or meshes) for individual real-world objects poses a challenge. One way around the challenging problem of getting 3D annotations for real images is to focus only on synthetic, computer-aided design (CAD) models [10, 35, 70]. This has the advantage that the data is large in scale (as there are many 3D CAD models available for download online) but many of the models are low quality or untextured and do not exist in the real world. This has led to a variety of 3D reconstruction methods that work well on clear-background renderings of synthetic objects [13, 24, 46, 65] but do not necessarily generalize to real images, new categories, or more complex object geometries [5, 6, 58].
21
+
22
+ To enable better real-world transfer, another class of 3D datasets aims to link existing 3D models with real-world images [63, 64]. These datasets find the closest matching CAD models for the objects in an image and have human annotators align the pose of each model to best match the image.
23
+
24
+ While this has enabled the evaluation of 3D reconstruction methods in-the-wild, the shape (and thus pose) matches are approximate. Further, because this approach relies on matching CAD models to images, it inherits the limitations of the existing CAD model datasets (i.e., poor coverage of real-world objects, basic geometries and textures).
25
+
26
+ The IKEA [41] and Pix3D [57] datasets sought to improve upon this by annotating real images with exact, pixel-aligned 3D models. The exact nature of such datasets has allowed them to be used as training data for single-view reconstruction [21] and has bridged some of the synthetic-to-real domain gap. However, these datasets are relatively small (90 and 395 unique 3D models, respectively), likely due to the difficulty of finding images that exactly match 3D models. Further, the larger of the two datasets [57] only contains 9 categories of objects. The provided 3D models are also untextured; thus the annotations in these datasets are typically used for shape- or pose-based tasks, rather than tasks such as material prediction.
27
+
28
+ Rather than trying to match images to synthetic 3D models, another approach to collecting 3D datasets is to start with real images (or video) and reconstruct the scene by classical reconstruction techniques such as structure from motion, multi-view stereo and texture mapping [12, 54, 55]. The benefit of these methods is that the reconstructed geometry faithfully represents an object of the real world. However, the collection process requires a great deal of manual effort and thus datasets of this nature tend to also be quite small (398, 125, and 1032 unique 3D models, respectively). The objects are also typically imaged in a controlled lab setting and do not have corresponding real images of the object "in context". Further, included textured surfaces are assumed to be Lambertian and thus do not display realistic reflectance properties.
29
+
30
+ Motivated by the lack of large-scale datasets with realistic 3D objects from a diverse set of categories and corresponding real-world multi-view images, we introduce Amazon Berkeley Objects (ABO). This dataset is derived from Amazon.com product listings, and as a result, contains imagery and 3D models that correspond to modern, real-world, household items. Overall, ABO contains 147,702 product listings associated with 398,212 unique catalog images, and up to 18 unique metadata attributes (category, color, material, weight, dimensions, etc.) per product. ABO also includes "360° View" turntable-style images for 8,222 products and 7,953 products with corresponding artist-designed 3D meshes. In contrast to existing 3D computer vision datasets, the 3D models in ABO have complex geometries and high-resolution, physically-based materials that allow for photorealistic rendering. A sample of the kinds of real-world images associated with a 3D model from ABO can be found in Figure 1, and sample metadata attributes are shown in Figure 3.
31
+
32
+ <table><tr><td>Dataset</td><td># Models</td><td># Classes</td><td>Real images</td><td>Full 3D</td><td>PBR</td></tr><tr><td>ShapeNet [10]</td><td>51.3K</td><td>55</td><td>X</td><td>✓</td><td>X</td></tr><tr><td>3D-Future [19]</td><td>16.6K</td><td>8</td><td>X</td><td>✓</td><td>X</td></tr><tr><td>Google Scans [54]</td><td>1K</td><td>-</td><td>X</td><td>✓</td><td>X</td></tr><tr><td>CO3D [53]</td><td>18.6K</td><td>50</td><td>✓</td><td>X</td><td>X</td></tr><tr><td>IKEA [42]</td><td>219</td><td>11</td><td>✓</td><td>✓</td><td>X</td></tr><tr><td>Pix3D [57]</td><td>395</td><td>9</td><td>✓</td><td>✓</td><td>X</td></tr><tr><td>PhotoShape [51]</td><td>5.8K</td><td>1</td><td>X</td><td>✓</td><td>✓</td></tr><tr><td>ABO (Ours)</td><td>8K</td><td>63</td><td>✓</td><td>✓</td><td>✓</td></tr></table>
33
+
34
+ Table 1. A comparison of the 3D models in ABO and other commonly used object-centric 3D datasets. ABO contains nearly 8K 3D models with physically-based rendering (PBR) materials and corresponding real-world catalog images.
35
+
36
+ ![](images/48b9fe0e71a7fa23cce8fb1ddd3fd6b8209fc6338f1fb23fc0edb85a20ce8cbf.jpg)
37
+ Figure 2. Posed 3D models in catalog images. We use instance masks to automatically generate 6-DOF pose annotations.
38
+
39
+ The dataset is released under the CC BY-NC 4.0 license and can be downloaded at https://amazon-berkeley-objects.s3.amazonaws.com/index.html.
40
+
41
+ To facilitate future research, we benchmark the performance of various methods on three computer vision tasks that can benefit from more realistic 3D datasets: (i) single-view shape reconstruction, where we measure the domain gap for networks trained on synthetic objects, (ii) material estimation, where we introduce a baseline for spatially-varying BRDF estimation from single- and multi-view images of complex real-world objects, and (iii) image-based multi-view object retrieval, where we leverage the 3D nature of ABO to evaluate the robustness of deep metric learning algorithms to object viewpoint and scenes.
42
+
43
+ # 2. Related Work
44
+
45
+ 3D Object Datasets ShapeNet [10] is a large-scale database of synthetic 3D CAD models commonly used for training single- and multi-view reconstruction models. IKEA Objects [42] and Pix3D [57] are image collections with 2D-3D alignment between CAD models and real images, however these images are limited to objects for which there is an exact CAD model match. Similarly, Pascal3D+ [64] and ObjectNet3D [63] provide 2D-3D alignment for images and provide more instances and categories; however, the 3D annotations are only approximate matches.
46
+
47
+ <table><tr><td rowspan="2">Benchmark</td><td rowspan="2">Domain</td><td rowspan="2">Classes</td><td colspan="3">Instances</td><td colspan="4">Images</td><td rowspan="2">Structure</td><td rowspan="2">Recall@1</td></tr><tr><td>train</td><td>val</td><td>test</td><td>train</td><td>val</td><td>test-target</td><td>test-query</td></tr><tr><td>CUB-200-2011</td><td>Birds</td><td>200</td><td>-</td><td>-</td><td>-</td><td>5994</td><td>0</td><td>-</td><td>5794</td><td>15 parts</td><td>79.2% [30]</td></tr><tr><td>Cars-196</td><td>Cars</td><td>196</td><td>-</td><td>-</td><td>-</td><td>8144</td><td>0</td><td>-</td><td>8041</td><td>-</td><td>94.8% [30]</td></tr><tr><td>In-Shop</td><td>Clothes</td><td>25</td><td>3997</td><td>0</td><td>3985</td><td>25882</td><td>0</td><td>12612</td><td>14218</td><td>Landmarks, poses, masks</td><td>92.6% [33]</td></tr><tr><td>SOP</td><td>Ebay</td><td>12</td><td>11318</td><td>0</td><td>11316</td><td>59551</td><td>0</td><td>-</td><td>60502</td><td>-</td><td>84.2% [30]</td></tr><tr><td>ABO (MVR)</td><td>Amazon</td><td>562</td><td>49066</td><td>854</td><td>836</td><td>298840</td><td>26235</td><td>4313</td><td>23328</td><td>Subset with 3D models</td><td>30.0%</td></tr></table>
48
+
49
+ Table 2. Common image retrieval benchmarks for deep metric learning and their statistics. Our proposed multi-view retrieval (MVR) benchmark based on ABO is significantly larger, more diverse and challenging than existing benchmarks, and exploits 3D models.
50
+
51
+ ![](images/709cab82e55696abb77b3c3a91783737e47d112f5664225f504da3b06b198817.jpg)
52
+ Figure 3. Sample catalog images and attributes that accompany ABO objects. Each object has up to 18 attribute annotations.
53
+
54
+ ![](images/6b3db90ad3e0b943bf54cabe8274e150835b7117577577f9e638db3433880b15.jpg)
55
+ Figure 4. 3D model categories. Each category is also mapped to a synset in the WordNet hierarchy. Note the y-axis is in log scale.
56
+
57
+ The Object Scans dataset [12] and Objectron [3] are both video datasets in which the camera operator walks around various objects, but they are limited in the number of categories represented. CO3D [53] also offers videos of common objects from 50 different categories, but it does not provide full 3D mesh reconstructions.
58
+
59
+ Existing 3D datasets typically assume very simplistic texture models that are not physically realistic. To improve on this, PhotoShape [51] augmented ShapeNet CAD models by automatically mapping spatially varying (SV-) bidirectional reflectance distribution functions (BRDFs) to meshes, yet the dataset consists only of chairs. The works in [17, 20] provide high-quality SV-BRDF maps, but only for planar surfaces. The dataset used in [32] contains only homogeneous BRDFs for various objects. [40] and [7] introduce datasets containing full SV-BRDFs, however their models are procedurally generated shapes that do not correspond to real objects.
60
+
61
+ In contrast, ABO provides shapes and SV-BRDFs created by professional artists for real-life objects that can be directly used for photorealistic rendering.
62
+
63
+ Table 1 compares the 3D subset of ABO with other commonly used 3D datasets in terms of size (number of objects and classes) and properties such as the presence of real images, full 3D meshes and physically-based rendering (PBR) materials. ABO is the only dataset that contains all of these properties and is much more diverse in number of categories than existing 3D datasets.
64
+
65
+ 3D Shape Reconstruction Recent methods for single-view 3D reconstruction differ mainly in the type of supervision and 3D representation used, whether it be voxels, point clouds, meshes, or implicit functions. Methods that require full shape supervision in the single-view [18, 22, 46, 57, 69] and multi-view [13, 31, 65] case are often trained using ShapeNet. There are other approaches that use more natural forms of multi-view supervision such as images, depth maps, and silhouettes [31, 59, 62, 66], with known cameras. Of course, multi-view 3D reconstruction has long been studied with classical computer vision techniques [27] like multi-view stereo and visual hull reconstruction. Learning-based methods are typically trained in a category-specific way and evaluated on new instances from the same category. Out of the works mentioned, only [69] claims to be category-agnostic. In this work we are interested in how well these ShapeNet-trained networks [13, 22, 46, 69] generalize to more realistic objects.
66
+
67
+ Material Estimation Several works have focused on modeling object appearance from a single image, however realistic datasets available for this task are relatively scarce and small in size. [38] use two networks to estimate a homogeneous BRDF and an SV-BRDF of a flat surface from a single image, using a self-augmentation scheme to alleviate the need for a large training set. However, their work is limited to a specific family of materials, and each separate material requires another trained network. [67] extend the idea of self-augmentation to train with unlabeled data, but their work is limited by the same constraints. [16] use a modified U-Net and rendering loss to predict the SV-BRDFs of flash-lit photographs consisting of only a flat surface.
68
+
69
+ ![](images/32491d64102454f75b8a2dfe9a08f9d564c01993e76d997c37127c7257b43409.jpg)
70
+ Figure 5. Qualitative 3D reconstruction results for R2N2, Occupancy Networks, GenRe, and Mesh-RCNN on ABO. All methods are pre-trained on ShapeNet and show a decrease in performance on objects from ABO.
71
+
72
+ To enable prediction for arbitrary shapes, [40] propose a cascaded CNN architecture with a single encoder and a separate decoder for each SV-BRDF parameter. While the method achieves good results on semi-uncontrolled lighting environments, it requires using the intermediate bounces of global illumination rendering as supervision. More recent works have turned towards using multiple images to improve SV-BRDF estimation, but still only with simplistic object geometries. For instance, [17] and [20] use multiple input images with a flash-lit light source, but only for a single planar surface. [7] and [8] both use procedurally generated shapes to estimate SV-BRDFs from multi-view images. ABO addresses the lack of sufficient realistic data for material estimation, and in this work we propose a simple baseline method that can estimate materials from single- or multi-view images of complex, real-world shapes.
73
+
74
+ 2D/3D Image Retrieval Learning to represent 3D shapes and natural images of products in a single embedding space has been tackled by [39]. They consider various relevant tasks, including cross-view image retrieval, shape-based image retrieval and image-based shape retrieval, but all are inherently constrained by the limitations of ShapeNet [10] (cross-view image retrieval is only considered for chairs and cars). [36] introduced 3D object representations for fine-grained recognition and a dataset of cars with real-world 2D imagery (CARS-196), which is now widely used for deep metric learning (DML) evaluation. Likewise, other datasets for DML focus on instances/fine categories of few object types, such as birds [60], clothes [44], or a few object categories [50].
75
+
76
+ Due to the limited diversity and the similar nature of query and target images in existing retrieval benchmarks, the performance of state-of-the-art DML algorithms is near saturation. Moreover, since these datasets come with little structure, the opportunities to analyze failure cases and improve algorithms are limited. Motivated by this, we derive a challenging large-scale benchmark dataset from ABO with hundreds of diverse categories and a proper validation set.
77
+
78
+ We also leverage the 3D nature of ABO to measure and improve the robustness of representations with respect to changes in viewpoint and scene. A comparison of ABO and existing benchmarks for DML can be found in Table 2.
79
+
80
+ # 3. The ABO Dataset
81
+
82
+ Dataset Properties The ABO dataset originates from worldwide product listings, metadata, images and 3D models provided by Amazon.com. This data consists of 147,702 listings of products from 576 product types sold by various Amazon-owned stores and websites (e.g. Amazon, PrimeNow, Whole Foods). Each listing is identified by an item ID and is provided with structured metadata corresponding to information that is publicly available on the listing's main webpage (such as product type, material, color, and dimensions) as well as the media available for that product. This includes 398,212 high-resolution catalog images, and, when available, the turntable images that are used for the "360° View" feature that shows the product imaged at $5^{\circ}$ or $15^{\circ}$ azimuth intervals (8,222 products).
83
+
84
+ 3D Models ABO also includes 7,953 artist-created high-quality 3D models in glTF 2.0 format. The 3D models are oriented in a canonical coordinate system where the "front" (when well defined) of all objects is aligned, and each model has a scale corresponding to real-world units. To enable these meshes to be easily used for comparison with existing methods trained on 3D datasets such as ShapeNet, we have collected category annotations for each 3D model and mapped them to noun synsets under the WordNet [47] taxonomy. Figure 4 shows a histogram of the 3D model categories.
85
+
86
+ Catalog Image Pose Annotations We additionally provide 6-DOF pose annotations for 6,334 of the catalog images. To achieve this, we develop an automated pipeline for pose estimation based on the knowledge of the 3D model in the image, off-the-shelf instance masks [28, 34], and differentiable rendering.
87
+
88
+ <table><tr><td rowspan="2"></td><td colspan="6">Chamfer Distance (↓)</td><td colspan="6">Absolute Normal Consistency (↑)</td></tr><tr><td>bench</td><td>chair</td><td>couch</td><td>cabinet</td><td>lamp</td><td>table</td><td>bench</td><td>chair</td><td>couch</td><td>cabinet</td><td>lamp</td><td>table</td></tr><tr><td>3D R2N2 [13]</td><td>2.46/0.85</td><td>1.46/0.77</td><td>1.15/0.59</td><td>1.88/0.25</td><td>3.79/2.02</td><td>2.83/0.66</td><td>0.51/0.55</td><td>0.59/0.61</td><td>0.57/0.62</td><td>0.53/0.67</td><td>0.51/0.54</td><td>0.51/0.65</td></tr><tr><td>Occ Nets [46]</td><td>1.72/0.51</td><td>0.72/0.39</td><td>0.86/0.30</td><td>0.80/0.23</td><td>2.53/1.66</td><td>1.79/0.41</td><td>0.66/0.68</td><td>0.67/0.76</td><td>0.70/0.77</td><td>0.71/0.77</td><td>0.65/0.69</td><td>0.67/0.78</td></tr><tr><td>GenRe [69]</td><td>1.54/2.86</td><td>0.89/0.79</td><td>1.08/2.18</td><td>1.40/2.03</td><td>3.72/2.47</td><td>2.26/2.37</td><td>0.63/0.56</td><td>0.69/0.67</td><td>0.66/0.60</td><td>0.62/0.59</td><td>0.59/0.57</td><td>0.61/0.59</td></tr><tr><td>Mesh R-CNN [22]</td><td>1.05/0.09</td><td>0.78/0.13</td><td>0.45/0.10</td><td>0.80/0.11</td><td>1.97/0.24</td><td>1.15/0.12</td><td>0.62/0.65</td><td>0.62/0.70</td><td>0.62/0.72</td><td>0.65/0.74</td><td>0.57/0.66</td><td>0.62/0.74</td></tr></table>
89
+
90
+ Table 3. Single-view 3D reconstruction generalization from ShapeNet to ABO. Chamfer distance and absolute normal consistency of predictions made on ABO objects from common ShapeNet classes. We report the same metrics for ShapeNet objects (denoted in gray), following the same evaluation protocol. All methods, with the exception of GenRe, are trained on all of the ShapeNet categories listed.
91
+
92
+ For each mask $\mathbf{M}$, we estimate $\mathbf{R} \in SO(3)$ and $\mathbf{T} \in \mathbb{R}^3$ such that the following silhouette loss is minimized
93
+
94
+ $$
95
+ \mathbf{R}^{*}, \mathbf{T}^{*} = \operatorname*{argmin}_{\mathbf{R}, \mathbf{T}} \left\| DR(\mathbf{R}, \mathbf{T}) - \mathbf{M} \right\|
96
+ $$
97
+
98
+ where $DR(\cdot)$ is a differentiable renderer implemented in PyTorch3D [52]. Examples of results from this approach can be found in Figure 2. Unlike previous approaches to CAD-to-image alignment [57, 63] that use human annotators in-the-loop to provide pose or correspondences, our approach is fully automatic except for a final human verification step.
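+
+ As an illustration of this fitting procedure, the sketch below optimizes $\mathbf{R}$ and $\mathbf{T}$ against a target mask with a PyTorch3D soft-silhouette renderer. It is a minimal sketch rather than the exact annotation pipeline: the mesh path, target mask, image size and optimizer settings are illustrative placeholders.
+
+ ```python
+ # Minimal sketch: fit a 6-DOF pose by minimizing a silhouette loss with a differentiable renderer.
+ # The mesh path, target mask and hyperparameters below are placeholders, not the paper's settings.
+ import torch
+ from pytorch3d.io import load_objs_as_meshes
+ from pytorch3d.renderer import (FoVPerspectiveCameras, RasterizationSettings,
+                                 MeshRenderer, MeshRasterizer, SoftSilhouetteShader)
+ from pytorch3d.transforms import so3_exp_map
+
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ mesh = load_objs_as_meshes(["model.obj"], device=device)        # placeholder mesh
+ target_mask = torch.zeros(256, 256, device=device)              # placeholder instance mask M
+
+ renderer = MeshRenderer(
+     rasterizer=MeshRasterizer(
+         cameras=FoVPerspectiveCameras(device=device),
+         raster_settings=RasterizationSettings(image_size=256, blur_radius=1e-4, faces_per_pixel=50)),
+     shader=SoftSilhouetteShader())
+
+ log_R = torch.zeros(1, 3, device=device, requires_grad=True)    # axis-angle rotation parameters
+ T = torch.tensor([[0.0, 0.0, 2.5]], device=device, requires_grad=True)
+ optimizer = torch.optim.Adam([log_R, T], lr=0.05)
+
+ for _ in range(200):
+     optimizer.zero_grad()
+     silhouette = renderer(mesh, R=so3_exp_map(log_R), T=T)[..., 3]   # alpha channel = soft silhouette
+     loss = torch.abs(silhouette - target_mask).mean()                # || DR(R, T) - M ||
+     loss.backward()
+     optimizer.step()
+ ```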
99
+
100
+ Material Estimation Dataset To perform material estimation from images, we use the Disney [9] base color, metallic, roughness parameterization given in glTF 2.0 specification [25]. We render $512 \times 512$ images from 91 camera positions along an upper icosphere of the object with a $60^{\circ}$ field-of-view using Blender's [14] Cycles path-tracer. To ensure diverse realistic lighting conditions and backgrounds, we illuminate the scene using 3 random environment maps out of 108 indoor HDRIs [23]. For these rendered images, we generate the corresponding ground truth base color, metallicness, roughness, and normal maps along with the object depth map and segmentation mask. The resulting dataset consists of 2.1 million rendered images and corresponding camera intrinsics and extrinsics.
101
+
102
+ # 4. Experiments
103
+
104
+ # 4.1. Evaluating Single-View 3D Reconstruction
105
+
106
+ As existing methods are largely trained in a fully supervised manner using ShapeNet [10], we are interested in how well they will transfer to more realistic objects. To measure how well these models transfer to real object instances, we evaluate the performance of a variety of these methods on objects from ABO. Specifically, we evaluate 3D-R2N2 [13], GenRe [69], Occupancy Networks [46], and Mesh R-CNN [22], all pre-trained on ShapeNet.
107
+
108
+ We selected these methods because they capture some of the top-performing single-view 3D reconstruction methods from the past few years and are varied in the type of 3D representation that they use (voxels in [13], spherical maps in [69], implicit functions in [46], and meshes in [22]) and the coordinate system used (canonical vs. view-space). While all the models we consider are pre-trained on ShapeNet, GenRe trains on a different set of classes and takes as input a silhouette mask at train and test time.
109
+
110
+ To study this question (irrespective of the question of cross-category generalization), we consider only the subset of ABO objects that fall into ShapeNet training categories. Out of the 63 categories in ABO with 3D models, we consider 6 classes that intersect with commonly used ShapeNet classes, capturing 4,170 of the 7,953 3D models. Some common ShapeNet classes, such as "airplane", have no matching ABO category; similarly, some categories in ABO like "air conditioner" and "weights" do not map well to ShapeNet classes.
111
+
112
+ For this experiment, we render a dataset (distinct from the ABO Material Estimation Dataset) of objects on a blank background from a similar distribution of viewpoints as in the rendered ShapeNet training set. We render 30 viewpoints of each mesh using Blender [14], each with a $40^{\circ}$ field-of-view and such that the entire object is visible. Camera azimuth and elevation are sampled uniformly on the surface of a unit sphere with a $-10^{\circ}$ lower limit on elevations to avoid uncommon bottom views.
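+
+ A viewpoint sampler in this spirit can be written in a few lines; the sketch below is an illustration under stated assumptions (uniform sampling on the sphere with rejection of elevations below $-10^{\circ}$), not the exact rendering script.
+
+ ```python
+ # Sketch: sample camera viewpoints roughly uniformly on the unit sphere,
+ # rejecting elevations below -10 degrees (illustrative, not the actual render script).
+ import numpy as np
+
+ def sample_viewpoints(n_views=30, min_elev_deg=-10.0, seed=0):
+     rng = np.random.default_rng(seed)
+     views = []
+     while len(views) < n_views:
+         azim = rng.uniform(-np.pi, np.pi)            # azimuth ~ U(-pi, pi)
+         elev = np.arcsin(rng.uniform(-1.0, 1.0))     # uniform-on-sphere elevation
+         if np.degrees(elev) < min_elev_deg:
+             continue                                  # avoid uncommon bottom views
+         cam_pos = np.array([np.cos(elev) * np.cos(azim),
+                             np.cos(elev) * np.sin(azim),
+                             np.sin(elev)])
+         views.append((np.degrees(azim), np.degrees(elev), cam_pos))
+     return views
+
+ viewpoints = sample_viewpoints()
+ ```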
113
+
114
+ GenRe and Mesh-RCNN make their predictions in "view-space" (i.e. pose aligned to the image view), whereas R2N2 and Occupancy Networks perform predictions in canonical space (predictions are made in the same category-specific, canonical pose despite the pose of the object in an image). For each method we evaluate Chamfer Distance and Absolute Normal Consistency and largely follow the evaluation protocol of [22].
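+
+ For reference, a symmetric Chamfer distance between point clouds sampled from the predicted and ground-truth meshes can be computed as below; this is a generic formulation and may differ from the exact normalization used in the protocol of [22].
+
+ ```python
+ # Generic symmetric Chamfer distance between two sampled point clouds.
+ import torch
+
+ def chamfer_distance(pred_pts, gt_pts):
+     # pred_pts: (N, 3), gt_pts: (M, 3)
+     d = torch.cdist(pred_pts, gt_pts)                 # (N, M) pairwise Euclidean distances
+     pred_to_gt = d.min(dim=1).values.pow(2).mean()    # nearest ground-truth point per prediction
+     gt_to_pred = d.min(dim=0).values.pow(2).mean()    # nearest predicted point per ground-truth point
+     return pred_to_gt + gt_to_pred
+
+ pred = torch.rand(5000, 3)   # placeholder points sampled from a predicted mesh
+ gt = torch.rand(5000, 3)     # placeholder points sampled from the ground-truth mesh
+ print(chamfer_distance(pred, gt).item())
+ ```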
115
+
116
+ Results A quantitative comparison of the four methods we considered on ABO objects can be found in Table 3.
117
+
118
+ We also re-evaluated each method's predictions on the ShapeNet test set from R2N2 [13] with our evaluation protocol and report those metrics. We observe that Mesh R-CNN [22] outperforms all other methods across the board on both ABO and ShapeNet in terms of Chamfer Distance, whereas Occupancy Networks performs the best in terms of Absolute Normal Consistency. As can be seen, there is a large performance gap between all ShapeNet and ABO predictions. This suggests that shapes and textures from ABO, while drawn from the same categories, come from the real world and are thus out of distribution and more challenging for models trained on ShapeNet. Further, we notice that the lamp category has a particularly large performance drop from ShapeNet to ABO. Qualitative results suggest that this is likely due to the difficulty of reconstructing thin structures. We highlight some qualitative results in Figure 5, including one particularly challenging lamp instance.
119
+
120
+ # 4.2. Material Prediction
121
+
122
+ To date, there are not many available datasets tailored to the material prediction task. Most publicly available datasets with large collections of 3D objects [10, 12, 19] do not contain physically-accurate reflectance parameters that can be used for physically-based rendering to generate photorealistic images. Datasets like PhotoShape [51] do contain such parameters but are limited to a single category. In contrast, the realistic 3D models in ABO are artist-created and have highly varied shapes and SV-BRDFs. We leverage this unique property to derive a benchmark for material prediction with large amounts of photorealistic synthetic data. We also present a simple baseline approach for both single- and multi-view material estimation of complex geometries.
123
+
124
+ Method To evaluate single-view and multi-view material prediction and establish a baseline approach, we use a U-Net-based model with a ResNet-34 backbone to estimate SV-BRDFs from a single viewpoint. The U-Net has a common encoder that takes an RGB image as input and has a multi-head decoder to output each component of the SV-BRDF separately. Inspired by recent networks in [7, 17], we align images from multiple viewpoints by projection using depth maps, and bundle the original image and projected image pairs as input data to enable an analogous approach for the multi-view network. We reuse the single-view architecture for the multi-view network and use global max pooling to handle an arbitrary number of input images. Similar to [16], we utilize a differentiable rendering layer to render the flash illuminated ground truth and compare it to similarly rendered images from our predictions to better regularize the network and guide the training process. Ground truth material maps are used for direct supervision.
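+
+ The shared-encoder, multi-head-decoder design with max pooling over views can be illustrated with the toy sketch below. It is not the actual architecture (the real model is a U-Net with a ResNet-34 backbone and skip connections); the layer sizes are arbitrary.
+
+ ```python
+ # Toy sketch: shared encoder, one decoder head per SV-BRDF component, max pooling over views.
+ # Not the paper's U-Net/ResNet-34 model; layer sizes are illustrative.
+ import torch
+ import torch.nn as nn
+
+ class TinySVBRDFNet(nn.Module):
+     def __init__(self, feat=64):
+         super().__init__()
+         self.encoder = nn.Sequential(
+             nn.Conv2d(3, feat, 3, stride=2, padding=1), nn.ReLU(),
+             nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU())
+         def head(out_ch):  # one lightweight decoder head per output map
+             return nn.Sequential(
+                 nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
+                 nn.Conv2d(feat, out_ch, 3, padding=1))
+         self.heads = nn.ModuleDict({"base_color": head(3), "roughness": head(1),
+                                     "metallic": head(1), "normals": head(3)})
+
+     def forward(self, views):                 # views: (B, V, 3, H, W) aligned input views
+         b, v, c, h, w = views.shape
+         feats = self.encoder(views.flatten(0, 1)).view(b, v, -1, h // 4, w // 4)
+         fused = feats.max(dim=1).values       # global max pooling over the view dimension
+         return {name: head(fused) for name, head in self.heads.items()}
+
+ out = TinySVBRDFNet()(torch.rand(2, 5, 3, 256, 256))
+ print({k: tuple(v.shape) for k, v in out.items()})
+ ```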
125
+
126
+ Our model takes as input $256 \times 256$ rendered images. For training, we randomly subsample 40 views on the icosphere for each object.
127
+
128
+ <table><tr><td></td><td>SV-net</td><td>MV-net (no proj.)</td><td>MV-net</td></tr><tr><td>Base Color (↓)</td><td>0.129</td><td>0.132</td><td>0.127</td></tr><tr><td>Roughness (↓)</td><td>0.163</td><td>0.155</td><td>0.129</td></tr><tr><td>Metallicness (↓)</td><td>0.170</td><td>0.167</td><td>0.162</td></tr><tr><td>Normals (↑)</td><td>0.970</td><td>0.949</td><td>0.976</td></tr><tr><td>Render (↓)</td><td>0.096</td><td>0.090</td><td>0.086</td></tr></table>
129
+
130
+ Table 4. ABO material estimation results for the single-view, multi-view, and multi-view network without projection (MV-net no proj.) ablation. Base color, roughness, metallicness and rendering loss are measured using RMSE (lower is better) - normal similarity is measured using cosine similarity (higher is better).
131
+
132
+ In the case of the multi-view network, for each reference view we select its 4 immediately adjacent views as neighboring views. We use mean squared error as the loss function for the base color, roughness, metallicness, surface normal and render losses. Each network is trained for 17 epochs using the AdamW optimizer [45] with a learning rate of 1e-3 and weight decay of 1e-4.
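+
+ A minimal training-step sketch following these settings (per-component MSE plus a rendering term, AdamW with the reported learning rate and weight decay) could look as follows. The flash-rendering function is a stub, since the differentiable rendering layer is not specified in this section.
+
+ ```python
+ # Sketch of the per-step loss and optimizer settings described above.
+ # render_flash() is a placeholder, not the paper's differentiable rendering layer.
+ import torch
+ import torch.nn.functional as F
+
+ def render_flash(maps):                      # placeholder flash-lit rendering
+     return maps["base_color"] * (1.0 - maps["roughness"])
+
+ def training_step(model, optimizer, views, gt_maps):
+     optimizer.zero_grad()
+     pred = model(views)
+     loss = sum(F.mse_loss(pred[k], gt_maps[k])
+                for k in ("base_color", "roughness", "metallic", "normals"))
+     loss = loss + F.mse_loss(render_flash(pred), render_flash(gt_maps))   # render loss
+     loss.backward()
+     optimizer.step()
+     return loss.item()
+
+ # optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
+ ```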
133
+
134
+ Results Results for the single-view network (SV-net) and multi-view network (MV-net) can be found in Table 4. The multi-view network has better performance compared to the single-view network on the base color, roughness, metallicness, and surface normal prediction tasks. The multi-view network is especially better at predicting properties that affect view-dependent specular components, such as roughness and metallicness.
135
+
136
+ We also run an ablation study on our multi-view network without using 3D structure to align neighboring views to the reference view (denoted as MV-net: no projection). First, we observe that even without 3D structure-based alignment, the network still outperforms the single-view network on roughness and metallicness predictions. Compared to the multi-view network, which uses 3D structure-based alignment, we can see that structure information leads to better performance for all parameters. We show some qualitative results from the test set in Figure 6.
137
+
138
+ As a focus of ABO is enabling real-world transfer, we also test our multi-view network on catalog images of objects from the test set using the pose annotations gathered by the methodology in Section 3, and use the inferred material parameters to relight the object (Figure 7). Despite the domain gap in lighting and background, and shift from synthetic to real, our network trained on rendered images makes reasonable predictions on the real catalog images. In one case (last row), the network fails to accurately infer the true base color, likely due to the presence of self-shadow.
139
+
140
+ # 4.3. Multi-View Cross-Domain Object Retrieval
141
+
142
+ Merging the available catalog images and 3D models in ABO, we derive a novel benchmark for object retrieval with the unique ability to measure the robustness of algorithms with respect to viewpoint changes.
143
+
144
+ ![](images/d0952d0d3a5b56477b688640e398d2471af33ad166d41dad90bc9a301d7b32e1.jpg)
145
+ Figure 6. Qualitative material estimation results for single-view (SV-net) and multi-view (MV-net) networks. We show estimated SV-BRDF properties (base color, roughness, metallicness, surface normals) for each input view of an object compared to the ground truth.
146
+
147
+ ![](images/2821790ffd5eabe1ab0cc9dab18988106aab96d5e0df060cbf37e3ca78c7e5c7.jpg)
148
+ Figure 7. Qualitative multi-view material estimation results on real catalog images. Each of the multiple views is aligned to the reference view using the catalog image pose annotations.
149
+
150
+ ![](images/b24432dfa1c90c944559bc312084575cab8c306e388511988e14f769b7a161e2.jpg)
151
+
152
+ Specifically, we leverage the renderings described in Section 3, with known azimuth and elevation, to provide more diverse views and scenes for training deep metric learning (DML) algorithms. We also use these renderings to evaluate retrieval performance against a large gallery of catalog images from ABO. This new benchmark is very challenging because the rendered images have complex and cluttered indoor backgrounds (compared to the cleaner catalog images) and display products at viewpoints that are not typically present in the catalog images. These two sources of images are effectively two separate image domains, making the test scenario a multi-view cross-domain retrieval task.
153
+
154
+ Method To compare the performance of state-of-the-art DML methods on our multi-view cross-domain retrieval benchmark, we use PyTorch Metric Learning [2] implementations that cover the main approaches to DML: NormSoftmax [68] (classification-based), ProxyNCA [48] (proxy-based) and Contrastive, TripletMargin, NTXent [11] and Multi-similarity [61] (tuple-based). We leveraged the Powerful Benchmarker framework [1] to run fair and controlled comparisons as in [49], including Bayesian hyperparameter optimization.
155
+
156
+ We opted for a ResNet-50 [29] backbone, projected it to 128D after a LayerNorm [4] layer, did not freeze the BatchNorm parameters and added an image padding transformation to obtain undistorted square images before resizing to $256 \times 256$ . We used batches of 256 samples with 4 samples per class, except for NormSoftmax and ProxyNCA where we obtained better results with a batch size of 32 and 1 sample per class. After hyperparameter optimization, we trained all losses for 1000 epochs and chose the best epoch based on the validation Recall@1 metric, computing it only every other epoch.
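+
+ The embedding trunk described above can be sketched with torchvision and PyTorch Metric Learning as follows; weight initialization, augmentation and exact hyperparameters in the benchmark may differ from this illustration.
+
+ ```python
+ # Sketch: ResNet-50 trunk, LayerNorm + 128-D linear projection, and a contrastive DML loss.
+ # Settings here are illustrative; see the text for the values used in the benchmark.
+ import torch
+ import torch.nn as nn
+ import torchvision
+ from pytorch_metric_learning import losses
+
+ class EmbeddingNet(nn.Module):
+     def __init__(self, dim=128):
+         super().__init__()
+         backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
+         backbone.fc = nn.Identity()          # keep the 2048-D pooled features
+         self.trunk = backbone
+         self.norm = nn.LayerNorm(2048)
+         self.proj = nn.Linear(2048, dim)
+
+     def forward(self, x):                    # x: (B, 3, 256, 256) padded-square images
+         return self.proj(self.norm(self.trunk(x)))
+
+ model = EmbeddingNet()
+ loss_fn = losses.ContrastiveLoss()           # one of the tuple-based losses compared above
+ images, labels = torch.rand(8, 3, 256, 256), torch.randint(0, 4, (8,))
+ loss = loss_fn(model(images), labels)
+ ```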
157
+
158
+ Importantly, whereas catalog and rendered images in the training set are balanced (188K vs 111K), classes with and without renderings are not (4K vs. 45K). Balancing them in each batch proved necessary to obtain good performance: not only do we want to exploit the novel viewpoints and scenes provided by the renderings to improve the retrieval performance, but there are otherwise simply not sufficiently many negative pairs of rendered images being sampled.
159
+
160
+ <table><tr><td rowspan="2">Recall@k (%)</td><td colspan="4">Rendered images</td><td>Catalog</td></tr><tr><td>k=1</td><td>k=2</td><td>k=4</td><td>k=8</td><td>k=1</td></tr><tr><td>Pre-trained</td><td>5.0</td><td>8.1</td><td>11.4</td><td>15.3</td><td>18.0</td></tr><tr><td>Contrastive</td><td>28.6</td><td>38.3</td><td>48.9</td><td>59.1</td><td>39.7</td></tr><tr><td>Multi-similarity</td><td>23.1</td><td>32.2</td><td>41.9</td><td>52.1</td><td>38.0</td></tr><tr><td>NormSoftmax</td><td>30.0</td><td>40.3</td><td>50.2</td><td>60.0</td><td>35.5</td></tr><tr><td>NTXent</td><td>23.9</td><td>33.0</td><td>42.6</td><td>52.0</td><td>37.5</td></tr><tr><td>ProxyNCA</td><td>29.4</td><td>39.5</td><td>50.0</td><td>60.1</td><td>35.6</td></tr><tr><td>TripletMargin</td><td>22.1</td><td>31.1</td><td>41.3</td><td>51.9</td><td>36.9</td></tr></table>
161
+
162
+ Table 5. Test performance of state-of-the-art deep metric learning methods on the ABO retrieval benchmark. Retrieving products from rendered images highlights performance gaps that are not as apparent when using catalog images.
163
+
164
+ Results As shown in Table 5, the ResNet-50 baseline trained on ImageNet largely fails at the task (Recall@1 of $5\%$ ). This confirms the challenging nature of our novel benchmark. DML is thus key to obtaining significant improvements. In our experiments, NormSoftmax, ProxyNCA and Contrastive performed better ( $\approx 29\%$ ) than the Multi-similarity, NTXent or TripletMargin losses ( $\approx 23\%$ ), a gap which was not apparent in other datasets, and is not as large when using cleaner catalog images as queries. Moreover, it is worth noting that the overall performance on ABO is significantly lower than for existing common benchmarks (see Table 2). This confirms their likely saturation [49], the value in new and more challenging retrieval tasks, and the need for novel metric learning approaches to handle the large scale and unique properties of our new benchmark.
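+
+ Recall@k here asks whether any of the k nearest catalog (gallery) embeddings shares the query product's identity; a straightforward computation of this metric is sketched below with random placeholder data.
+
+ ```python
+ # Sketch: Recall@k for cross-domain retrieval (rendered queries vs. a catalog gallery).
+ import torch
+ import torch.nn.functional as F
+
+ def recall_at_k(query_emb, query_ids, gallery_emb, gallery_ids, k=1):
+     q = F.normalize(query_emb, dim=1)
+     g = F.normalize(gallery_emb, dim=1)
+     sims = q @ g.t()                                   # cosine similarities (Nq, Ng)
+     topk = sims.topk(k, dim=1).indices                 # k most similar gallery items per query
+     hits = (gallery_ids[topk] == query_ids.unsqueeze(1)).any(dim=1)
+     return hits.float().mean().item()
+
+ # toy usage with random embeddings and product identities
+ r1 = recall_at_k(torch.randn(100, 128), torch.randint(0, 20, (100,)),
+                  torch.randn(500, 128), torch.randint(0, 20, (500,)), k=1)
+ ```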
165
+
166
+ Further, the azimuth $(\theta)$ and elevation $(\varphi)$ angles available for rendered test queries allow us to measure how performance degrades as these parameters diverge from typical product viewpoints in ABO's catalog images. Figure 8 highlights two main regimes for both azimuth and elevation: azimuths beyond $|\theta| = 75^{\circ}$ and elevations above $\varphi = 50^{\circ}$ are significantly more challenging to match, consistently for all approaches. Closing this gap is an interesting direction of future research on DML for multi-view object retrieval. For one, the current losses do not explicitly model the geometric information in training data.
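+
+ The per-viewpoint analysis in Figure 8 amounts to grouping per-query Recall@1 outcomes by azimuth (or elevation) bins; a small sketch of that bookkeeping, on placeholder data, is given below.
+
+ ```python
+ # Sketch: bin per-query Recall@1 outcomes by query azimuth for a Figure 8-style analysis.
+ import numpy as np
+
+ def recall_by_azimuth(hits, azimuths_deg, bin_width=15):
+     # hits: (N,) 0/1 Recall@1 outcomes; azimuths_deg: (N,) query azimuths in degrees
+     bins = np.arange(-180, 180 + bin_width, bin_width)
+     idx = np.digitize(azimuths_deg, bins) - 1
+     return {(bins[i], bins[i + 1]): hits[idx == i].mean()
+             for i in range(len(bins) - 1) if np.any(idx == i)}
+
+ curve = recall_by_azimuth(np.random.randint(0, 2, 1000), np.random.uniform(-180, 180, 1000))
+ ```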
167
+
168
+ # 5. Conclusion
169
+
170
+ In this work we introduced ABO, a new dataset to help bridge the gap between real and synthetic 3D worlds. We demonstrated that the set of real-world derived 3D models in ABO are a challenging test set for ShapeNet-trained 3D reconstruction approaches, and that both view- and canonical-space methods do not generalize well to ABO meshes despite sampling them from the same distribution of training classes. We also trained both single-view and multi-view networks for SV-BRDF material estimation of
171
+
172
+ ![](images/ab34347bf20c3b17051e8c9262c1cd73c24581a7d472fb73fda3b4ca4bf70f17.jpg)
173
+
174
+ ![](images/87c210c5ae56b1f494dd0707a1ef148da5a600636c20683643de4a762d3e789a.jpg)
175
+ Figure 8. Recall@1 as a function of the azimuth and elevation of the product view. For all methods, retrieval performance degrades rapidly beyond azimuth $|\theta| > 75^{\circ}$ and elevation $\varphi > 50^{\circ}$ .
176
+
177
+ complex, real-world geometries - a task that is uniquely enabled by the nature of our 3D dataset. We found that incorporating multiple views leads to more accurate disentanglement of SV-BRDF properties. Finally, joining the larger set of product images with synthetic renders from ABO 3D models, we proposed a challenging multi-view retrieval task that alleviates some of the limitations in diversity and structure of existing datasets, which are close to performance saturation. The 3D models in ABO allowed us to exploit novel viewpoints and scenes during training and benchmark the performance of deep metric learning algorithms with respect to the azimuth and elevation of query images.
178
+
179
+ While not considered in this work, the large amounts of text annotations (product descriptions and keywords) and non-rigid products (apparel, home linens) enable a wide array of possible language and vision tasks, such as predicting styles, patterns, captions or keywords from product images. Furthermore, the 3D objects in ABO correspond to items that naturally occur in a home, and have associated object weight and dimensions. This can benefit robotics research and support simulations of manipulation and navigation.
180
+
181
+ Acknowledgements We thank Pietro Perona and Frederic Devernay. This work was funded in part by an NSF GRFP (#1752814) and the Amazon-BAIR Commons Program.
182
+
183
+ # References
184
+
185
+ [1] Powerful benchmarker. https://kevinmusgrave.github.io/powerful-benchmarker. Accessed: 2022-03-19. 7
186
+ [2] Pytorch metric learning. https://kevinmusgrave.github.io/pytorch-metric-learning. Accessed: 2022-03-19. 7
187
+ [3] Adel Ahmadyan, Liangkai Zhang, Jianing Wei, Artsiom Ablavatski, and Matthias Grundmann. Objectron: A large scale dataset of object-centric videos in the wild with pose annotations. arXiv preprint arXiv:2012.09988, 2020. 3
188
+ [4] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. 7
189
+ [5] Miguel Angel Bautista, Walter Talbott, Shuangfei Zhai, Nitish Srivastava, and Joshua M Susskind. On the generalization of learning-based 3d reconstruction. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2180–2189, 2021. 1
190
+ [6] Jan Bechtold, Maxim Tatarchenko, Volker Fischer, and Thomas Brox. Fostering generalization in single-view 3d reconstruction by learning a hierarchy of local and global shape priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15880-15889, 2021. 1
191
+ [7] Sai Bi, Zexiang Xu, Kalyan Sunkavalli, David Kriegman, and Ravi Ramamoorthi. Deep 3d capture: Geometry and reflectance from sparse multi-view images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5960-5969, 2020. 3, 4, 6
192
+ [8] Mark Boss, Varun Jampani, Kihwan Kim, Hendrik Lensch, and Jan Kautz. Two-shot spatially-varying brdf and shape estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3982-3991, 2020. 4
193
+ [9] Brent Burley and Walt Disney Animation Studios. Physically-based shading at disney. In ACM SIGGRAPH, volume 2012, pages 1-7. vol. 2012, 2012. 5
194
+ [10] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. 1, 2, 4, 5, 6
195
+ [11] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597-1607. PMLR, 2020. 7
196
+ [12] Sungjoon Choi, Qian-Yi Zhou, Stephen Miller, and Vladlen Koltun. A large dataset of object scans. arXiv preprint arXiv:1602.02481, 2016. 2, 3, 6
197
+ [13] Christopher B Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In European conference on computer vision, pages 628-644. Springer, 2016. 1, 3, 5, 6
198
+
199
+ [14] Blender Online Community. Blender - a 3D modelling and rendering package. Blender Foundation, Stichting Blender Foundation, Amsterdam, 2018. 5
200
+ [15] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 1
201
+ [16] Valentin Deschaintre, Miika Aittala, Fredo Durand, George Drettakis, and Adrien Bousseau. Single-image svbrdf capture with a rendering-aware deep network. ACM Transactions on Graphics (TOG), 37(4):128, 2018. 3, 6
202
+ [17] Valentin Deschaintre, Miika Aittala, Frédo Durand, George Drettakis, and Adrien Bousseau. Flexible svbrdf capture with a multi-image deep network. In Computer Graphics Forum, volume 38, pages 1-13. Wiley Online Library, 2019. 3, 4, 6
203
+ [18] Haoqiang Fan, Hao Su, and Leonidas J Guibas. A point set generation network for 3d object reconstruction from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 605-613, 2017. 3
204
+ [19] Huan Fu, Rongfei Jia, Lin Gao, Mingming Gong, Binqiang Zhao, Steve Maybank, and Dacheng Tao. 3d-future: 3d furniture shape with texture. arXiv preprint arXiv:2009.09633, 2020. 2, 6
205
+ [20] Duan Gao, Xiao Li, Yue Dong, Pieter Peers, Kun Xu, and Xin Tong. Deep inverse rendering for high-resolution svbrdf estimation from an arbitrary number of images. ACM Transactions on Graphics (TOG), 38(4):134, 2019. 3, 4
206
+ [21] Georgia Gkioxari, Jitendra Malik, and Justin Johnson. Mesh r-cnn. In ICCV, 2019. 2
207
+ [22] Georgia Gkioxari, Jitendra Malik, and Justin Johnson. Mesh r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, pages 9785-9795, 2019. 3, 5, 6
208
+ [23] Andreas Mischok Greg Zaal, Sergej Majboroda. Hdrihaven. https://hdrihaven.com/. Accessed: 2020-11-16. 5
209
+ [24] Thibault Groueix, Matthew Fisher, Vladimir G Kim, Bryan C Russell, and Mathieu Aubry. A papier-mâché approach to learning 3d surface generation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 216-224, 2018. 1
210
+ [25] Khronos Group. gltf 2.0 specification. https://github.com/KhronosGroup/glTF. Accessed: 2020-11-16. 5
211
+ [26] Agrim Gupta, Piotr Dollar, and Ross Girshick. Lvis: A dataset for large vocabulary instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5356-5364, 2019. 1
212
+ [27] Richard Hartley and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003. 3
213
+ [28] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In ICCV, 2017. 5
214
+ [29] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 7
215
+
216
+ [30] HeeJae Jun, ByungSoo Ko, Youngjoon Kim, Insik Kim, and Jongtack Kim. Combination of multiple global descriptors for image retrieval. arXiv preprint arXiv:1903.10663, 2019. 3
217
+ [31] Abhishek Kar, Christian Häne, and Jitendra Malik. Learning a multi-view stereo machine. In Advances in neural information processing systems, pages 365-376, 2017. 3
218
+ [32] Kihwan Kim, Jinwei Gu, Stephen Tyree, Pavlo Molchanov, Matthias Nießner, and Jan Kautz. A lightweight approach for on-the-fly reflectance estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 20-28, 2017. 3
219
+ [33] Sungyeon Kim, Dongwon Kim, Minsu Cho, and Suha Kwak. Proxy anchor loss for deep metric learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020. 3
220
+ [34] Alexander Kirillov, Yuxin Wu, Kaiming He, and Ross Girshick. Pointrend: Image segmentation as rendering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9799-9808, 2020. 5
221
+ [35] Sebastian Koch, Albert Matveev, Zhongshi Jiang, Francis Williams, Alexey Artemov, Evgeny Burnaev, Marc Alexa, Denis Zorin, and Daniele Panozzo. Abc: A big cad model dataset for geometric deep learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9601-9611, 2019. 1
222
+ [36] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013. 4
223
+ [37] Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009. 1
224
+ [38] Xiao Li, Yue Dong, Pieter Peers, and Xin Tong. Modeling surface appearance from a single photograph using self-augmented convolutional neural networks. ACM Transactions on Graphics (TOG), 36(4):45, 2017. 3
225
+ [39] Yangyan Li, Hao Su, Charles Ruizhongtai Qi, Noa Fish, Daniel Cohen-Or, and Leonidas J. Guibas. Joint embeddings of shapes and images via cnn image purification. ACM Trans. Graph., 2015. 4
226
+ [40] Zhengqin Li, Zexiang Xu, Ravi Ramamoorthi, Kalyan Sunkavalli, and Manmohan Chandraker. Learning to reconstruct shape and spatially-varying reflectance from a single image. In SIGGRAPH Asia 2018 Technical Papers, page 269. ACM, 2018. 3, 4
227
+ [41] Joseph J Lim, Hamed Pirsiavash, and Antonio Torralba. Parsing IKEA objects: Fine pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 2992-2999, 2013. 2
228
+ [42] Joseph J Lim, Hamed Pirsiavash, and Antonio Torralba. Parsing IKEA objects: Fine pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 2992-2999, 2013. 2
229
+ [43] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer, 2014. 1
230
+
231
+ [44] Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaou Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 4
232
+ [45] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 6
233
+ [46] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4460-4470, 2019. 1, 3, 5
234
+ [47] George A Miller. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41, 1995. 4
235
+ [48] Yair Movshovitz-Attias, Alexander Toshev, Thomas K. Leung, Sergey Ioffe, and Saurabh Singh. No fuss distance metric learning using proxies. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017. 7
236
+ [49] Kevin Musgrave, Serge Belongie, and Ser-Nam Lim. A metric learning reality check. In European Conference on Computer Vision, pages 681-699. Springer, 2020. 7, 8
237
+ [50] Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 4
238
+ [51] Keunhong Park, Konstantinos Rematas, Ali Farhadi, and Steven M Seitz. Photoshape: Photorealistic materials for large-scale shape collections. arXiv preprint arXiv:1809.09761, 2018. 2, 3, 6
239
+ [52] Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, and Georgia Gkioxari. Accelerating 3d deep learning with pytorch3d. arXiv preprint arXiv:2007.08501, 2020. 5
240
+ [53] Jeremy Reizenstein, Roman Shapovalov, Philipp Henzler, Luca Sbordone, Patrick Labatut, and David Novotny. Common objects in 3d: Large-scale learning and evaluation of real-life 3d category reconstruction. arXiv preprint arXiv:2109.00512, 2021. 2, 3
241
+ [54] Google Research. Google scanned objects, August. 2
242
+ [55] Arjun Singh, James Sha, Karthik S Narayan, Tudor Achim, and Pieter Abbeel. Bigbird: A large-scale 3d database of object instances. In 2014 IEEE international conference on robotics and automation (ICRA), pages 509-516. IEEE, 2014. 2
243
+ [56] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the IEEE international conference on computer vision, pages 843–852, 2017. 1
244
+ [57] Xingyuan Sun, Jiajun Wu, Xiuming Zhang, Zhoutong Zhang, Chengkai Zhang, Tianfan Xue, Joshua B Tenenbaum, and William T Freeman. Pix3d: Dataset and methods for single-image 3d shape modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2974-2983, 2018. 2, 3, 5
245
+ [58] Maxim Tatarchenko, Stephan R Richter, René Ranftl, Zhuwen Li, Vladlen Koltun, and Thomas Brox. What do
246
+
247
+ single-view 3d reconstruction networks learn? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3405-3414, 2019. 1
248
+ [59] Shubham Tulsiani, Tinghui Zhou, Alexei A Efros, and Jitendra Malik. Multi-view supervision for single-view reconstruction via differentiable ray consistency. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2626-2634, 2017. 3
249
+ [60] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011. 4
250
+ [61] Xun Wang, Xintong Han, Weilin Huang, Dengke Dong, and Matthew R Scott. Multi-similarity loss with general pair weighting for deep metric learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5022-5030, 2019. 7
251
+ [62] Olivia Wiles and Andrew Zisserman. Silnet: Single-and multi-view reconstruction by learning from silhouettes. arXiv preprint arXiv:1711.07888, 2017. 3
252
+ [63] Yu Xiang, Wonhui Kim, Wei Chen, Jingwei Ji, Christopher Choy, Hao Su, Roozbeh Mottaghi, Leonidas Guibas, and Silvio Savarese. Objectnet3d: A large scale database for 3d object recognition. In European Conference on Computer Vision, pages 160-176. Springer, 2016. 1, 2, 5
253
+ [64] Yu Xiang, Roozbeh Mottaghi, and Silvio Savarese. Beyond Pascal: A benchmark for 3d object detection in the wild. In IEEE winter conference on applications of computer vision, pages 75-82. IEEE, 2014. 1, 2
254
+ [65] Haozhe Xie, Hongxun Yao, Xiaoshuai Sun, Shangchen Zhou, and Shengping Zhang. Pix2vox: Context-aware 3d reconstruction from single and multi-view images. In Proceedings of the IEEE International Conference on Computer Vision, pages 2690-2698, 2019. 1, 3
255
+ [66] Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, and Honglak Lee. Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision. In Advances in neural information processing systems, pages 1696-1704, 2016. 3
256
+ [67] Wenjie Ye, Xiao Li, Yue Dong, Pieter Peers, and Xin Tong. Single image surface appearance modeling with self-augmented cnns and inexact supervision. In Computer Graphics Forum, volume 37, pages 201-211. Wiley Online Library, 2018. 3
257
+ [68] Andrew Zhai and Hao-Yu Wu. Classification is a strong baseline for deep metric learning. In British Machine Vision Conference (BMVC), 2019. 7
258
+ [69] Xiuming Zhang, Zhoutong Zhang, Chengkai Zhang, Josh Tenenbaum, Bill Freeman, and Jiajun Wu. Learning to reconstruct shapes from unseen classes. In Advances in Neural Information Processing Systems, pages 2257-2268, 2018. 3, 5
259
+ [70] Qingnan Zhou and Alec Jacobson. Thingi10k: A dataset of 10,000 3d-printing models. arXiv preprint arXiv:1605.04797, 2016. 1
abodatasetandbenchmarksforrealworld3dobjectunderstanding/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:80e85ebddaa3aadbfb68c96753d7b0538d537c8141d2be380f3ffad6217766e6
3
+ size 535029
abodatasetandbenchmarksforrealworld3dobjectunderstanding/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d36df99b0655ab1bc5f08cbb46be00c8fffb7b853018caf33fdd3d36bc2010a1
3
+ size 307664
abpnadaptiveblendpyramidnetworkforrealtimelocalretouchingofultrahighresolutionphoto/0836d8e6-e6e7-4830-9182-ed9613a4b549_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2ecad11078f54cbca924837ee3aa4710da0a07af1c18d97586433e6e2ed94ff2
3
+ size 81479
abpnadaptiveblendpyramidnetworkforrealtimelocalretouchingofultrahighresolutionphoto/0836d8e6-e6e7-4830-9182-ed9613a4b549_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5c9aa9ca1f41837557eb80cfd099388ce6fe0fed0666f3bfd1b0676f0cdd66ac
3
+ size 103731
abpnadaptiveblendpyramidnetworkforrealtimelocalretouchingofultrahighresolutionphoto/0836d8e6-e6e7-4830-9182-ed9613a4b549_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0183f9c644ee2c40c20bceaa6dca3d68f5285b0b0fe105029d196dac3aac5fce
3
+ size 2935763
abpnadaptiveblendpyramidnetworkforrealtimelocalretouchingofultrahighresolutionphoto/full.md ADDED
@@ -0,0 +1,374 @@
1
+ # ABPN: Adaptive Blend Pyramid Network for Real-Time Local Retouching of Ultra High-Resolution Photo
2
+
3
+ Biwen Lei†, Xiefan Guo*, Hongyu Yang, Miaomiao Cui†, Xuansong Xie†, Di Huang†
4
+ †DAMO Academy, Alibaba Group
5
+
6
+ biwen.lbw@alibaba-inc.com, {guoxiefan, hongyu.yang.csv}@gmail.com, miaomiao.cmm@alibaba-inc.com, xingtong.xxs@taobao.com, dhuang.csv@outlook.com
7
+
8
+ # Abstract
9
+
10
+ Photo retouching finds many applications in various fields. However, most existing methods are designed for global retouching and seldom pay attention to the local region, while the latter is actually much more tedious and time-consuming in photography pipelines. In this paper, we propose a novel adaptive blend pyramid network, which aims to achieve fast local retouching on ultra high-resolution photos. The network is mainly composed of two components: a context-aware local retouching layer (LRL) and an adaptive blend pyramid layer (BPL). The LRL is designed to implement local retouching on low-resolution images, giving full consideration of the global context and local texture information, and the BPL is then developed to progressively expand the low-resolution results to the higher ones, with the help of the proposed adaptive blend module and refining module. Our method outperforms the existing methods by a large margin on two local photo retouching tasks and exhibits excellent performance in terms of running speed, achieving real-time inference on 4K images with a single NVIDIA Tesla P100 GPU. Moreover, we introduce the first high-definition cloth retouching dataset CRHD-3K to promote the research on local photo retouching. The dataset is available at https://github.com/youngLBW/CRHD-3K.
11
+
12
+ # 1. Introduction
13
+
14
+ Photo retouching [25], especially portrait photo retouching, finds a vast range of applications in photography scenarios including weddings, advertisements, personal recording, etc. While extensive works [5, 12, 14, 21, 46, 57] yield impressive results on photo retouching, most of them manipulate the attributes of the entire image, such as color, illumination, and exposure. Few methods deal with the local regions in photos (e.g., face, clothing, and commodity), which is actually the most tedious and time-consuming step in professional photography pipelines.
15
+
16
+ ![](images/b65fd658a80419d5f37e2a29f3033cbd3ec483b1ac8fa22494300efe3378d085.jpg)
17
+
18
+ ![](images/e9075a2c1bf06973e803e0e8e64aea436d743fe0e16e6eb71c60e3dd571e59ac.jpg)
19
+ (a) Input
20
+
21
+ ![](images/2d2ea59ae9da464ad8642cbe349da59f8afe1cd575316202cbac468417f34a46.jpg)
22
+
23
+ ![](images/67c167e3425642365ae9a7b697ba4dd04293b84239a4d6a4f554ac44eb057156.jpg)
24
+ (b) Ours
25
+
26
+ ![](images/450b20b8f8752798cc73328628343d7ba8c26f3f56366823ccefb99ece596205.jpg)
27
+
28
+ ![](images/2a4c95fc667f0f8a2868c30a22c4fd274ab9a3abd9cec60e18dfe794b3921ded.jpg)
29
+ (c) Target
30
+ Figure 1. High-fidelity retouched photos. From left to right: (a) raw photos, (b) our retouched results, and (c) ground-truth images.
31
+
32
+
33
+
34
+ To focus on this kind of problem, we summarize it as the Local Photo Retouching (LPR) task, whose goal is to edit the target region in a photo while keeping the rest of the image unchanged. Different from general local image editing tasks (such as image inpainting and rain removal), LPR pays more attention to enhancing the aesthetic perception and visual quality of the target object. Fig. 1 gives some LPR examples.
35
+
36
+ We conclude three main challenges of the LPR task as: (1) accurate localization of the target region; (2) local generation with global consistency and detail fidelity; and (3) efficient processing of ultra high-resolution images. The first two are brought by the characteristics of the task itself, while the last one is determined by the application scenarios of LPR. As ultra high-resolution photos have been widely used in various photographic scenes, the ability to process them becomes a key factor of LPR methods in practice. Given these challenges above, we in this paper analyze the applicability of existing methods to the LPR task and attempt to propose a more suitable solution to it.
37
+
38
+ In recent years, a large body of work has been devoted to the image-to-image translation task and achieved impressive results in style transfer [11, 16, 19, 45], semantic image synthesis [7, 18, 37], etc. Most of these methods adopt a deep network with an encoding-decoding paradigm to fulfill faithful translation, which results in a heavy computational cost, thus severely limiting their applications in high-resolution scenarios. Some methods [12, 25, 47, 52] try to accelerate the models by transferring the computational burden from high-resolution maps to low-resolution ones and successfully accomplish global translation on high-resolution images. However, due to the lack of attention to local regions, few of them adapt well to the LPR task.
39
+
40
+ Instead of performing global translation, a number of works focus on the local image editing task, such as image inpainting [28, 39, 55], shadow removal [15, 32, 33], and rain removal [40-42, 48, 49]. Most of them rely on the masks that indicate the target region as input, while in the LPR task, accurately acquiring such masks is itself a quite challenging issue. Though some methods resort to the deep generative networks and perform local editing without specifying the masks, they are hardly capable of processing ultra high-resolution images directly. Besides, AutoRetouch [46] employs a sliding window strategy to achieve local modeling and retouching, but it fails to capture the global context, especially in the case of high resolution.
41
+
42
+ Based on the observations, we propose a novel adaptive blend pyramid network (ABPN) for local retouching of ultra high-resolution photos, as shown in Fig. 3. The network addresses the three challenges aforementioned via two components: a context-aware local retouching layer (LRL) and an adaptive blend pyramid layer (BPL). In general, given a high-resolution image, the LRL performs local retouching on its thumbnail and the subsequent BPL expands the outputs of LRL to the original size of the input. For LRL, specifically, we design a novel multi-task architecture to fulfill mask prediction of the target region and local generation simultaneously. A local attentive module (LAM) is proposed, where the local semantics and texture of the target region and the global context can be fully captured and aggregated to achieve consistent local retouching. For BPL, inspired by the blend layer in digital image editing, we develop a light-weight adaptive blend module (ABM) and its reverse version (R-ABM) to implement the fast expansion from the low-resolution results to the higher ones, ensuring great extensibility and detail fidelity. Extensive experiments on two LPR tasks reveal that our method outperforms the existing methods by a large margin in terms of retouching quality and processing efficiency, demonstrating its superiority in the LPR task.
43
+
44
+ Moreover, since the editing work is usually time-consuming and requires high image processing skills, there are few publicly available datasets for the LPR task.
45
+
46
+ Accordingly, we build and release the first high-definition cloth retouching dataset (CRHD-3K) to facilitate the research.
47
+
48
+ Our main contributions in this work are as follows:
49
+
50
+ (A) We propose a novel framework, ABPN, for local retouching of ultra high-resolution photos, which exhibits remarkable efficiency (real-time inference on 4K images with a single NVIDIA Tesla P100 GPU) and superior retouching quality compared to existing methods.
51
+ (B) We present a local attentive module (LAM), which is effective in capturing and aggregating the global context and local texture.
52
+ (C) We design an adaptive blend module (ABM), which provides powerful extensibility to the framework, allowing the fast expansion from low-resolution results to the higher ones.
53
+ (D) To boost the research on LPR (e.g., cloth retouching), we introduce the first high-definition cloth retouching dataset CRHD-3K.
54
+
55
+ # 2. Related Work
56
+
57
+ Photo Retouching. Benefiting from the development of deep convolutional neural networks, learning-based methods [5,10,12,14,21,46,50,57] have recently been presented to produce exciting results on photo retouching. Most of them, however, are limited by heavy computational and memory costs as the photo resolution increases. In addition, these methods are designed for global photo retouching and do not fit the LPR task well.
58
+
59
+ Image-to-Image Translation. Image-to-image translation was originally defined by [18], in which many computer-vision tasks were summarized as a pixel-to-pixel predicting job and a conditional GANs-based framework was developed as a general solution. Following [18], various methods have been proposed to address the image translation problem, using paired images [7,18,27,37,43,47,52] or unpaired images [3,8,9,16,17,23,25,30,36,38,59]. Several works focus on a specific image translation task (such as semantic image synthesis [7,18,37] and style transfer [11,16,19,45]) and achieve impressive performance. However, the works above mainly concentrate on global transformation and give less attention to the local region, which limits their capability in the LPR task.
60
+
61
+ Image Inpainting. Image Inpainting is the closest task to LPR, which refers to the process of reconstructing missing regions of an image given a corresponding mask. The deep generative methods [13,22,26,28,29,35,39,51,53-56,58] have achieved significant progress, owing to their powerful feature learning ability. However, acquiring accurate masks is itself a very challenging issue, and taking unreasonable masks tends to incur large errors in filled results. Recently, the blind image inpainting methods [6, 31, 53] relax the restriction by completing the visual contents without specifying masks for missing regions. Nevertheless, those methods
62
+
63
+ ![](images/a5d4d67c6efb0211cb361dc81fe46b6a2c7c5780553602ac4e335c45c8fef652.jpg)
64
+ Figure 2. Examples from the CRHD-3K Dataset (zoom in for a better view). Left: raw photos, right: retouched results by professional staffs with high image processing expertise.
65
+
66
+ assume contamination with simple data distributions or undesired images, which makes them fail to take full advantage of the inherent semantics and textures of the image for LPR. Moreover, the existing methods can only handle low-resolution inputs, and ultra high-resolution image inpainting remains extremely challenging. There are also some local image editing tasks that aim to restore the local region in the image, including shadow removal [15, 32, 33], rain removal [40-42, 48, 49], etc. Unfortunately, due to the strong specificity of these methods, few of them are adaptable to the general LPR task.
67
+
68
+ High-resolution Image Editing. To enable translation on high-resolution images, [12, 25, 47, 52] attempt to alleviate the space and time burden by shifting the major computation from high-resolution maps to low-resolution ones. Though yielding impressive efficiency, these methods are still problematic when applied to LPR due to the lack of attention to local regions.
69
+
70
+ # 3. The CRHD-3K Dataset
71
+
72
+ Photo retouching [24] refers to the process of enhancing the visual aesthetic quality of an image, and cloth retouching is one of the most representative tasks, which is conventionally achieved via hand-crafted operations. However, the process of manual retouching is tedious and time-consuming. In order to facilitate learning-based retouching methods, we introduce the first large-scale high-definition cloth retouching (CRHD-3K) dataset.
73
+
74
+ Data collection. We initially collected more than 60,000 raw photos from Unsplash $^{1}$ , and further carefully checked them one by one, where outliers (e.g., severe motion blur) and duplicates (e.g., same content) were removed. The CRHD-3K dataset finally includes 3,022 high-definition raw portrait photos.
75
+
76
+ Data labeling. To obtain high-quality retouched photos, the process is accomplished by a team of professional image editors, with the goal of removing the wrinkles, creases, and other blemishes on the clothes to make them look more smooth and beautiful. The retouching time for each photo is 3 to 5 minutes. Some retouched examples are shown in Fig. 2.
77
+
78
+ Data statistics. The CRHD-3K dataset consists of 3,022 pairs of raw and retouched photos, of which 2,522 are for training and 500 for testing. The resolutions mainly vary in the range of 4K to 6K.
79
+
80
+ Ethics guidelines. To avoid the attendant risk of harm from the data, we blurred and cropped the personally identifiable information contained in the photos (e.g., faces), and kept only the clothing components as much as possible.
81
+
82
+ Cloth retouching is a typical and quite challenging LPR task due to the diversity of clothing patterns and the subjectivity of wrinkle judgment. More importantly, ultra high-resolution images from the CRHD-3K dataset place extremely strict requirements on the time and space efficiency of the model.
83
+
84
+ # 4. Methods
85
+
86
+ # 4.1. Overview
87
+
88
+ As discussed above, subject to the lack of attention to local regions or the high computational costs, it is difficult for the existing methods to cope with the LPR task. To solve these problems, we develop an adaptive blend pyramid network for local retouching of ultra high-resolution photos. Fig. 3 shows an overview of our framework. The network is mainly composed of two components: a context-aware local retouching layer (LRL) and an adaptive blend pyramid layer (BPL). Given an image $I_0 \in \mathbb{R}^{h \times w \times 3}$ , we first build an image pyramid $P_I = [I_0, I_1, \dots, I_l]$ and a high-frequency component pyramid $P_H = [H_0, H_1, \dots, H_{l-1}]$ , where $P_H$ is acquired following the Laplacian pyramid [4] and $l$ is the number of downsampling operations ( $l = 2$ by default in Fig. 3). Then LRL is applied to $I_l \in \mathbb{R}^{\frac{h}{2^l} \times \frac{w}{2^l} \times 3}$ to predict the target region mask $M$ and generate the retouched results $R_l \in \mathbb{R}^{\frac{h}{2^l} \times \frac{w}{2^l} \times 3}$ . After that, we employ BPL to expand the low-resolution outputs $R_l$ to the original size of $I_0$ . Specifically, the reverse adaptive blend module (R-ABM) is introduced to generate the blend layer $B_l \in \mathbb{R}^{\frac{h}{2^l} \times \frac{w}{2^l} \times 3}$ , which records the translation information from $I_l$ to $R_l$ . By progressively upsampling and refining, the blend layer $B_0$ with high resolution and abundant details is obtained. At last, we utilize the adaptive blend module (ABM) to apply $B_0$ to $I_0$ to generate the final results $R_0$ .
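+
+ As a concrete illustration, the pyramids $P_I$ and $P_H$ can be built in a few lines. The sketch below is a minimal PyTorch version assuming bilinear down/upsampling; the function name and the exact resampling kernel are illustrative choices rather than the paper's specification.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def build_pyramids(img, num_levels=2):
+     """Build the image pyramid P_I and the high-frequency pyramid P_H.
+     img: tensor of shape (B, 3, H, W); H_i = I_i - up(I_{i+1}) follows the
+     Laplacian pyramid decomposition [4]."""
+     images, highs = [img], []
+     for _ in range(num_levels):
+         down = F.interpolate(images[-1], scale_factor=0.5,
+                              mode='bilinear', align_corners=False)
+         up = F.interpolate(down, size=images[-1].shape[-2:],
+                            mode='bilinear', align_corners=False)
+         highs.append(images[-1] - up)   # high-frequency residual H_i
+         images.append(down)             # next pyramid level I_{i+1}
+     return images, highs
+
+ # e.g. P_I, P_H = build_pyramids(torch.rand(1, 3, 2160, 3840), num_levels=2)
+ ```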
89
+
90
+ We introduce these sub-networks and loss functions used
91
+
92
+ ![](images/235e345c25fd2b8162a9b7a77f3247992a889288be96a2dde150da4a61b516ee.jpg)
93
+ Figure 3. Overview of the proposed Adaptive Blend Pyramid Network (ABPN).
94
+
95
+ ![](images/d4195fcb9211ff31ff03b3f22455bf22c8ab7df5aeb8c6bf491255234a197cd5.jpg)
96
+ Figure 4. The details of the local attentive module (LAM).
97
+
98
+ for training in detail in the following sections, including LRL in Sec. 4.2, BPL in Sec. 4.3, and loss functions in Sec. 4.4.
99
+
100
+ # 4.2. Context-aware Local Retouching Layer
101
+
102
+ In this section, we propose a context-aware local retouching layer (LRL) to address the first two challenges mentioned in Sec. 1: accurate localization of the target region and local generation with global consistency. As shown in Fig. 3, the LRL adopts a multi-task architecture and consists of a mutual encoder, a mask prediction branch (MPB) and a local retouching branch (LRB).
103
+
104
+ Mutual Encoder. The mutual encoder is composed of six simple convolution blocks ( $3 \times 3$ convolutions, batch normalization, and ReLU) in series, and the outputs of the convolution blocks form a feature pyramid $P_F = [F_{skip_i} \in \mathbb{R}^{\frac{h}{2^{l + i}} \times \frac{w}{2^{l + i}} \times c_i}]_{i=0}^6$ , where $c_i$ denotes the number of channels and $F_{skip_0} = I_l$ . Sharing the encoder between the subsequent MPB and LRB is feasible because both branches rely on the semantic features and contextual information to generate their results. It also greatly reduces the computational complexity of the model.
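+
+ A minimal sketch of such an encoder in PyTorch is given below; the channel widths and strides are illustrative assumptions, not the paper's exact configuration.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class MutualEncoder(nn.Module):
+     """Six 3x3 Conv-BN-ReLU blocks in series, each halving the resolution,
+     so that the i-th output has spatial size h/2^(l+i) x w/2^(l+i)."""
+
+     def __init__(self, channels=(16, 32, 64, 128, 256, 256)):
+         super().__init__()
+         blocks, in_ch = [], 3
+         for out_ch in channels:
+             blocks.append(nn.Sequential(
+                 nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
+                 nn.BatchNorm2d(out_ch),
+                 nn.ReLU(inplace=True)))
+             in_ch = out_ch
+         self.blocks = nn.ModuleList(blocks)
+
+     def forward(self, x):
+         feats = [x]                        # F_skip_0 = I_l
+         for block in self.blocks:
+             feats.append(block(feats[-1]))
+         return feats                       # P_F = [F_skip_0, ..., F_skip_6]
+ ```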
105
+
106
+ Mask Prediction Branch. MPB aims to automatically predict the mask $M \in \mathbb{R}^{\frac{h}{2^{l + 2}} \times \frac{w}{2^{l + 2}} \times 1}$ of the target region to guide subsequent local region generation. It consists of four convolution blocks (3 × 3 convolutions, batch normalization, and leakyReLU) and a classification head. Besides, we employ skip connections [44] to incorporate low-level features to improve the accuracy of segmentation. Note that
107
+
108
+ $M$ is $4\times$ smaller than $\pmb{I}_l$ but it is sufficient for the guidance of LRB, without sacrificing the overall performance. Although most of the datasets do not directly provide the target region mask $M_{gt}$ for supervision, owing to the characteristics of the LPR task, we can obtain $M_{gt}$ by taking a difference between $\pmb{I}_l$ and its target and applying a threshold to it.
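+
+ A hedged sketch of how such a pseudo ground-truth mask could be derived follows; the threshold value and the use of the mean absolute channel difference are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def make_target_mask(raw, retouched, threshold=0.02, size=None):
+     """Binarize the per-pixel difference between I_l and its retouched
+     target to obtain M_gt; optionally downsample to the MPB output size."""
+     diff = (raw - retouched).abs().mean(dim=1, keepdim=True)  # (B, 1, H, W)
+     mask = (diff > threshold).float()
+     if size is not None:
+         mask = F.interpolate(mask, size=size, mode='nearest')
+     return mask
+ ```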
109
+
110
+ The contributions of MPB to the network are two-fold. First, the predicted mask $M$ itself can help LRB focus on the target region to enhance the retouching quality. Second, through joint training, the global context and semantic information can be better perceived, thereby achieving consistent generation results.
111
+
112
+ Local Retouching Branch. Most image translation methods adopt a traditional encoder-decoder architecture to implement global translation, which leads to insufficient attention to the target regions. Based on gated convolution (GConv) [55], we thus design a local attentive module (LAM) to better capture local semantics and textures, as shown in Fig. 4. Different from image inpainting, the target region in LPR contains rich texture information, which is essential for generating detailed and realistic results. In this case, we apply skip connections to incorporate low-level features $F_{skip_i}$ from the mutual encoder. Besides, instead of only involving the binary mask in the first or the last block of LRB, we concatenate the soft mask $M$ in every LAM to guide feature fusion and update at different levels. Owing to the gating mechanism of GConv, spatial attention and channel attention are simultaneously employed to fully fuse the features and capture the semantics and textures of the target region. By stacking LAMs, LRB is then able to produce consistent and faithful local retouched results. Note that although the predicted mask may contain errors, the final retouched area is barely affected, as the mask is only used as soft guidance in LRB.
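+
+ The sketch below illustrates the gating idea behind LAM in PyTorch; the exact fusion layout of Fig. 4 is richer than this, and all layer names and widths here are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class GatedConv(nn.Module):
+     """Gated convolution [55]: features modulated by a learned soft gate."""
+     def __init__(self, in_ch, out_ch):
+         super().__init__()
+         self.feat = nn.Conv2d(in_ch, out_ch, 3, padding=1)
+         self.gate = nn.Conv2d(in_ch, out_ch, 3, padding=1)
+
+     def forward(self, x):
+         return F.leaky_relu(self.feat(x), 0.2) * torch.sigmoid(self.gate(x))
+
+ class LAMBlock(nn.Module):
+     """One LAM-style step: fuse the decoder feature with the skip feature
+     from the mutual encoder and the soft mask M, then apply GConv."""
+     def __init__(self, dec_ch, skip_ch, out_ch):
+         super().__init__()
+         self.gconv = GatedConv(dec_ch + skip_ch + 1, out_ch)
+
+     def forward(self, dec_feat, skip_feat, mask):
+         size = dec_feat.shape[-2:]
+         skip_feat = F.interpolate(skip_feat, size=size, mode='bilinear',
+                                   align_corners=False)
+         mask = F.interpolate(mask, size=size, mode='bilinear',
+                              align_corners=False)
+         return self.gconv(torch.cat([dec_feat, skip_feat, mask], dim=1))
+ ```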
113
+
114
+ # 4.3. Adaptive Blend Pyramid Layer
115
+
116
+ LRL achieves local retouching on a low-resolution image, and the following objective is to extend the result to a larger scale while simultaneously enhancing its detail fidelity. Inspired by the concept of blend layer (or top layer) in the digital image editing, we propose an adaptive blend module (ABM) and its reverse one (R-ABM) to achieve lossless transformation between two images with a sparse and smooth blend layer. Then, we build a pyramid to progressively upsample and refine the blend layer and finally apply it to the original input to generate the final result. We describe the implementation details of these components below.
117
+
118
+ Adaptive Blend Module. The blend layer is often utilized to be blended with the image (or base layer) in various modes [1] to fulfill different image editing tasks, such as contrast adjustment, dodge and burn. Generally, given an input image $\mathbf{I} \in \mathbb{R}^{h \times w \times 3}$ and a blend layer $\mathbf{B} \in \mathbb{R}^{h \times w \times 3}$ , we blend the two layers to produce the result $\mathbf{R} \in \mathbb{R}^{h \times w \times 3}$ as:
119
+
120
+ $$
121
+ \boldsymbol {R} = f (\boldsymbol {I}, \boldsymbol {B}) \tag {1}
122
+ $$
123
+
124
+ where $f$ is a pixel-wise function and denotes the mapping formula determined by the blend mode. Limited by the translation ability, a certain blend mode with the fixed function $f$ is difficult to apply to various image editing tasks. To better adapt to the data distribution and the transformation patterns of different tasks, we refer to the Soft Light blend mode [2] and design an adaptive blend module (ABM) as follows:
125
+
126
+ $$
127
+ g (\boldsymbol {I}, i) = \boldsymbol {E} \underbrace {\odot \boldsymbol {I} \odot \boldsymbol {I} \cdots \odot \boldsymbol {I}} _ {i} \tag {2}
128
+ $$
129
+
130
+ $$
131
+ \boldsymbol {R} = f _ {a} (\boldsymbol {I}, \boldsymbol {B}) = \sum_ {i = 0} ^ {2} \left(\left(j _ {i} \boldsymbol {B} + k _ {i} \boldsymbol {E}\right) \odot g (\boldsymbol {I}, i)\right) \tag {3}
132
+ $$
133
+
134
+ where $\odot$ indicates the Hadamard product, $j_{i}$ and $k_{i}$ are learnable parameters shared by ABMs and R-ABM in the framework, and $\pmb{E} \in \mathbb{R}^{h \times w \times 3}$ denotes a constant matrix with the value 1 for all items.
135
+
136
+ Reverse Adaptive Blend Module. ABM requires the blend layer $\pmb{B}$ as a prerequisite. However, the preceding LRL only provides the low-resolution results $\pmb{R}_l$ . To acquire the blend layer $\pmb{B}$ , we solve Eq. (3) for it and build a reverse adaptive blend module (R-ABM) as:
137
+
138
+ $$
139
+ \boldsymbol {B} = f _ {r} (\boldsymbol {I}, \boldsymbol {R}) = \frac {\boldsymbol {R} - \sum_ {i = 0} ^ {2} \left(k _ {i} g (\boldsymbol {I} , i)\right)}{\sum_ {i = 0} ^ {2} \left(j _ {i} g (\boldsymbol {I} , i)\right)} \tag {4}
140
+ $$
141
+
142
+ where $j_{i},k_{i}$ and $g$ are consistent with those in Eq. (3).
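+
+ Eqs. (3) and (4) translate almost directly into code. The sketch below implements both directions with shared coefficients; the initial values of $j_i$ and $k_i$ and the small eps stabilizer in the denominator are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class AdaptiveBlend(nn.Module):
+     """ABM (Eq. 3) and R-ABM (Eq. 4) with shared learnable coefficients.
+     Note g(I, i) reduces to the element-wise power I^i, with I^0 = E."""
+
+     def __init__(self):
+         super().__init__()
+         self.j = nn.Parameter(torch.tensor([0.5, 0.5, 0.0]))  # illustrative init
+         self.k = nn.Parameter(torch.tensor([0.0, 0.5, 0.5]))  # illustrative init
+
+     def blend(self, img, blend_layer):          # ABM: (I, B) -> R
+         return sum((self.j[i] * blend_layer + self.k[i]) * img.pow(i)
+                    for i in range(3))
+
+     def reverse(self, img, result, eps=1e-6):   # R-ABM: (I, R) -> B
+         denom = sum(self.j[i] * img.pow(i) for i in range(3))
+         numer = result - sum(self.k[i] * img.pow(i) for i in range(3))
+         return numer / (denom + eps)
+
+ # round-trip sanity check (reversibility up to the eps stabilizer):
+ # abm = AdaptiveBlend(); I, B = torch.rand(1, 3, 8, 8), torch.rand(1, 3, 8, 8)
+ # assert torch.allclose(abm.reverse(I, abm.blend(I, B)), B, atol=1e-4)
+ ```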
143
+
144
+ In general, utilizing the blend layer as an intermediate medium, ABM and R-ABM offer an adaptive transformation between the image $\pmb{I}$ and the result $\pmb{R}$ . Instead of directly expanding the low-resolution result, we employ the
145
+
146
+ blend layer to achieve this goal, which has advantages in two aspects: (1) In the LPR task, the blend layer mainly records the local transformation between two images. This means it contains less irrelevant information and can be readily refined with a light-weight network. (2) The blend layer is blended with the original image to implement the final retouching, which makes full use of the information of the image itself, thereby delivering local retouching with high detail fidelity.
147
+
148
+ Actually, there are plenty of alternative functions or strategies for achieving adaptive blending. An intuitive way is to utilize two networks composed of $1 \times 1$ convolutions and nonlinear activation layers to replace Eq. (3) and Eq. (4), respectively. However, the transformations learned by the two networks are irreversible and may increase the difficulty of training. In contrast, the good reversibility and consistency between ABM and R-ABM ensure that all the blend layers lie in the same domain, which effectively reduces the burden on the model. Moreover, Eq. (3) is a generalized form of Pegtop's formula [2], which is easy to optimize and tends to produce a smooth and sparse blend layer (see Fig. 7 and Fig. 8). In our framework, we fulfill the expansion by progressively upsampling and refining the blend layer. Smoothness and sparseness mean a smaller information gap between the low-resolution blend layer and its high-resolution target, which greatly eases the burden on the refining module. See the experimental results on ABM in Sec. 5.4 for its superiority.
149
+
150
+ ABM and R-ABM hold simple structures but fully consider the characteristics of the LPR task and provide powerful extensibility to the framework, facilitating fast expansion of the low-resolution results at a negligible cost.
151
+
152
+ Refining Module. To apply the low-resolution blend layers to high-resolution images, the refining module is essential for compensating for the information loss caused by downsampling. Since the blend layer is initially generated from the low-resolution result, it lacks the transformation information of the high-frequency components. We thus include the high-frequency component of the image as an additional input to the refining module. Owing to the smoothness and sparsity of the blend layer produced by R-ABM, we can build a light-weight refining module as:
153
+
154
+ $$
155
+ \boldsymbol {B} _ {i} = \phi_ {2} \left(h \left(\phi_ {1} \left(\operatorname {C a t} \left(\operatorname {u p} \left(\boldsymbol {B} _ {i + 1}\right), \boldsymbol {H} _ {i}\right)\right)\right)\right) + \operatorname {u p} \left(\boldsymbol {B} _ {i + 1}\right) \tag {5}
156
+ $$
157
+
158
+ where up denotes bilinear interpolation, Cat is channel-wise concatenation, $H_{i}$ ( $i \in \{0,1,\dots,l - 1\}$ ) is the high-frequency component of image $I_{i}$ , $\phi_{1}$ and $\phi_{2}$ are $3 \times 3$ convolutions with 16 and 3 filters respectively, and $h$ indicates leaky ReLU with a negative slope of 0.2.
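+
+ Eq. (5) corresponds to a very small residual block. The sketch below follows the stated configuration ( $3 \times 3$ convolutions with 16 and 3 filters, leaky ReLU with slope 0.2, bilinear upsampling); padding and other minor details are assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class RefiningModule(nn.Module):
+     """One refining step of Eq. (5): upsample the coarser blend layer,
+     concatenate the high-frequency component H_i, and predict a residual."""
+
+     def __init__(self):
+         super().__init__()
+         self.phi1 = nn.Conv2d(6, 16, 3, padding=1)  # Cat(up(B_{i+1}), H_i): 6 channels
+         self.phi2 = nn.Conv2d(16, 3, 3, padding=1)
+
+     def forward(self, blend_coarse, high_freq):
+         up = F.interpolate(blend_coarse, size=high_freq.shape[-2:],
+                            mode='bilinear', align_corners=False)
+         x = torch.cat([up, high_freq], dim=1)
+         return self.phi2(F.leaky_relu(self.phi1(x), 0.2)) + up
+ ```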
159
+
160
+ Given the input and output of LRL, we first adopt Eq. (4) to calculate a primitive blend layer $B_{l}$ . By continuously upsampling and refining the blend layer, we then obtain a high-resolution blend layer $B_{0}$ with detailed transformation
161
+
162
+ ![](images/45e2fa5c9222fe9aeab37a91d321736c39b8d4533ad2599dc124f699b858b0ad.jpg)
163
+ Figure 5. Qualitative comparison on FFHQR and CRHD-3K (zoom in for a better view): (a) original images, (b) VCNet [53], (c) AutoRetouch [46], (d) pix2pixHD [52], (e) ASAPNet [47], (f) LPTN [25], (g) Ours, and (h) ground-truth images.
164
+
165
+ information. At last, Eq. (3) is applied to $B_{0}$ and $I_{0}$ to deliver the final result.
166
+
167
+ # 4.4. Loss Functions
168
+
169
+ The model is trained in an end-to-end manner, and the loss functions that we utilize for training consist of (i) the multi-scale mean squared-error (MSE) loss $\mathcal{L}_{mse} = \sum_{i=0}^{l}||\pmb{R}_{gt_i} - \pmb{R}_i||_2^2$ , (ii) the perceptual loss $\mathcal{L}_{perc}$ [19], applied only to the low-resolution outputs $\pmb{R}_l$ to save training memory, (iii) the adversarial loss $\mathcal{L}_{adv}$ [18] for the final outputs $\pmb{R}_0$ , (iv) the dice loss $\mathcal{L}_{dice}$ [34] for the predicted mask $M$ of MPB, and (v) the total variation loss $\mathcal{L}_{tv}$ [19] for each blend layer $\pmb{B}_i$ ( $i \in \{0,1,\dots,l\}$ ). In summary, the joint loss is written as:
170
+
171
+ $$
172
+ \mathcal{L}_{joint} = \lambda_{1} \mathcal{L}_{mse} + \lambda_{2} \mathcal{L}_{perc} + \lambda_{3} \mathcal{L}_{adv} + \lambda_{4} \mathcal{L}_{dice} + \lambda_{5} \mathcal{L}_{tv},
173
+ $$
174
+
175
+ where $\lambda_1 = \lambda_4 = 1$ and $\lambda_{2} = \lambda_{3} = \lambda_{5} = 0.1$ by default.
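+
+ A rough sketch of how the joint objective could be assembled is shown below. The dice and total-variation terms are written in common simplified forms, and the perceptual and adversarial terms are assumed to be computed elsewhere (e.g., by a VGG feature network and a discriminator); all names are illustrative.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def dice_loss(pred, target, eps=1.0):
+     """Soft dice loss [34] between the predicted soft mask and M_gt."""
+     inter = (pred * target).sum()
+     return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
+
+ def tv_loss(x):
+     """Total variation loss [19], encouraging a smooth blend layer."""
+     return (x[..., :, 1:] - x[..., :, :-1]).abs().mean() + \
+            (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
+
+ def joint_loss(results, targets, mask, mask_gt, blends, perc_loss, adv_loss):
+     """results/targets: multi-scale outputs R_i and ground truths;
+     blends: blend layers B_i; perc_loss/adv_loss: precomputed scalars."""
+     l_mse = sum(F.mse_loss(r, t) for r, t in zip(results, targets))
+     l_tv = sum(tv_loss(b) for b in blends)
+     return (1.0 * l_mse + 0.1 * perc_loss + 0.1 * adv_loss
+             + 1.0 * dice_loss(mask, mask_gt) + 0.1 * l_tv)
+ ```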
176
+
177
+ # 5. Experiments
178
+
179
+ # 5.1. Experimental Settings
180
+
181
+ Datasets. To verify the effectiveness and generalization of our model in LPR, we conduct experiments on two typical and popular local retouching scenarios: cloth retouching (CRHD-3K) and face retouching (FFHQR). The CRHD-3K dataset is described in Sec. 3. FFHQR [46] is a large-scale face retouching dataset based on FFHQ [20], which contains 70,000 high-definition face images from FFHQ and
182
+
183
+ their corresponding retouched images. To enable comparison with methods of differing inference capability, we pad and resize all the images to $1024 \times 1024$ for training and evaluation in our experiments. Besides, we show the performance of the proposed network on CRHD-3K at different resolutions (from $480\mathrm{p}$ to $4\mathrm{K}$ ) in Sec. 5.5. CRHD-3K is randomly divided into a training set of 2,522 images and a test set of 500 images, and FFHQR is split into train/val/test sets as in [46].
184
+
185
+ Implementation details. Our model and baselines are implemented using PyTorch 1.0 on Python 3.6 and trained on a single NVIDIA Tesla P100 GPU. We train our model using the Adam optimizer. With a batch size of 8, the learning rate is $5 \times 10^{-4}$ initially and reduced by $10 \times$ after 100 epochs. We set $l$ at 2 as default in our experiments. Training the whole framework to convergence takes about 18 hours on CRHD-3K and about 70 hours on FFHQR.
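+
+ The stated schedule maps onto standard PyTorch utilities, as sketched below; the placeholder module and the total number of epochs are assumptions for illustration only.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ model = nn.Conv2d(3, 3, 3, padding=1)   # placeholder standing in for ABPN
+
+ optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
+ # Reduce the learning rate by 10x after 100 epochs, as stated above.
+ scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
+                                                  milestones=[100], gamma=0.1)
+
+ for epoch in range(120):                # total epoch count is an assumption
+     # ... iterate over CRHD-3K / FFHQR with batch size 8, compute the joint
+     # loss, call loss.backward() and optimizer.step() per iteration ...
+     scheduler.step()                    # advance the schedule once per epoch
+ ```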
186
+
187
+ # 5.2. Qualitative Comparison
188
+
189
+ Fig. 5 compares the images generated by the proposed model with those by the current state-of-the-art methods on the FFHQR [46] and CRHD-3K datasets. As we can see, pix2pixHD [52], ASAPNet [47], and LPTN [25] are limited in handling the LPR task, and fail to distinguish the retouching regions, resulting in global transfer. Moreover, visual artifacts are observed in the results of pix2pixHD [52] and ASAPNet [47]. VCNet [53] and AutoRetouch [46] yield competitive results; however, the details are still less elegant than ours. To sum up, the proposed model outperforms
190
+
191
+ ![](images/1d9cb0e4a702efa1fdac9714528b64b024f275c8519d9e6e7af81bc7795ffa3d.jpg)
192
+ (a) Input
193
+
194
+ ![](images/8c2315356aa2e79d603f10d909aa82ffb38f99d88e6cc3c9c89e93a1daacec7f.jpg)
195
+ (b) w/o MPB
196
+ IoU=10.8
197
+ Figure 6. Ablation study toward MPB and LAM on CRHD-3K. The masks presented in the upper right corner of the last four columns show the changing area relative to the input, following the same processing method illustrated in Sec. 4.2.
198
+
199
+ ![](images/fe5a7c1b91640c56cb45d7cc8ee2852432869540e4f3e6fbcb813995c5ee6a38.jpg)
200
+ (c) w/o LAM
201
+ $\mathrm{IoU} = 72.9$
202
+
203
+ ![](images/1864203641e201a629e9b7cff32d179c20e8c1cb0c52621002b53a45e18ba20b.jpg)
204
+
205
+ ![](images/248d52e4061ef16366951df9beb398a97af414a4163b1eabf185c3615ce49114.jpg)
206
+ (d) Ours
207
+ IoU=79.6
208
+
209
+ ![](images/2193c0357c74f180c029eab6dba56e9dc299afef534f08ce9d2c4a25ae19921d.jpg)
210
+ (e) Target
211
+
212
+ ![](images/9cf6b32778435fc4a3348911f42112bdadeb5300c43e88adda3ab0c79d967fb2.jpg)
213
+
214
+ <table><tr><td>Datasets</td><td colspan="4">FFHQR [46]</td><td colspan="4">CRHD-3K</td><td></td></tr><tr><td>Metrics</td><td>LPIPS†</td><td>PSNR¶</td><td>SSIM¶</td><td>User Study¶</td><td>LPIPS†</td><td>PSNR¶</td><td>SSIM¶</td><td>User Study¶</td><td>Time†</td></tr><tr><td>VCNet [53]</td><td>0.039</td><td>38.36</td><td>0.973</td><td>13.3%</td><td>0.084</td><td>31.99</td><td>0.902</td><td>6.0%</td><td>0.197</td></tr><tr><td>AutoRetouch [46]</td><td>0.025</td><td>41.83</td><td>0.986</td><td>18.0%</td><td>0.081</td><td>32.70</td><td>0.907</td><td>7.3%</td><td>0.057</td></tr><tr><td>pix2pixHD [52]</td><td>0.053</td><td>31.39</td><td>0.952</td><td>2.0%</td><td>0.101</td><td>27.23</td><td>0.892</td><td>1.3%</td><td>0.055</td></tr><tr><td>ASAPNet [47]</td><td>0.163</td><td>26.21</td><td>0.910</td><td>0.0%</td><td>0.101</td><td>30.31</td><td>0.887</td><td>4.7%</td><td>0.015</td></tr><tr><td>LPTN [25]</td><td>0.069</td><td>37.42</td><td>0.949</td><td>4.0%</td><td>0.042</td><td>35.09</td><td>0.963</td><td>20.0%</td><td>0.035</td></tr><tr><td>Ours</td><td>0.018</td><td>44.35</td><td>0.993</td><td>62.7%</td><td>0.029</td><td>37.35</td><td>0.971</td><td>60.7%</td><td>0.009</td></tr></table>
215
+
216
+ Table 1. Objective quantitative comparison († Lower is better; ¶ Higher is better).
217
+
218
+ the counterparts with reasonable retouched results of high detail fidelity.
219
+
220
+ # 5.3. Quantitative Comparison
221
+
222
+ Objective evaluation. We quantitatively evaluate the proposed method with three metrics: LPIPS, PSNR and SSIM. Table 1 shows the results achieved on the FFHQR [46] and CRHD-3K datasets, where the proposed method achieves the best results compared with the other approaches, clearly demonstrating its effectiveness.
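+
+ For reference, PSNR and SSIM can be computed with scikit-image (version 0.19 or later for the `channel_axis` argument) as sketched below; LPIPS additionally requires a learned network such as the `lpips` package and is omitted here. The data range and channel handling shown are assumptions for 8-bit RGB images.
+
+ ```python
+ import numpy as np
+ from skimage.metrics import peak_signal_noise_ratio, structural_similarity
+
+ def evaluate_pair(pred, target):
+     """PSNR and SSIM for a pair of uint8 RGB images of identical size."""
+     psnr = peak_signal_noise_ratio(target, pred, data_range=255)
+     ssim = structural_similarity(target, pred, channel_axis=-1, data_range=255)
+     return psnr, ssim
+
+ # pred = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
+ # target = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
+ # print(evaluate_pair(pred, target))
+ ```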
223
+
224
+ User study. We evaluate the proposed method via a human subjective study. 10 volunteers with image processing expertise were invited to choose the most elegant image from those generated by the proposed method and the state-of-the-art approaches. Specifically, each participant has 15 questions from FFHQR [46] and 15 questions from CRHD-3K. We tally the votes and show the statistics in Table 1. Our method performs favorably against the other methods.
225
+
226
+ Inference time. We evaluate the inference time of all the models on images of $1024 \times 1024$ pixels with a single NVIDIA Tesla P100 GPU (16 GB). As shown in Table 1, VCNet [53], AutoRetouch [46] and pix2pixHD [52] are computationally expensive on high-resolution images. Thanks to the proposed adaptive blend pyramid architecture, our model outperforms the other methods regarding the time consumption.
227
+
228
+ # 5.4. Ablation Study
229
+
230
+ In order to verify the rationality and effectiveness of the proposed components, we conduct extensive ablation experiments on the CRHD-3K dataset. Table 2 shows the quantitative results, including ablation comparison for MPB,
231
+
232
+ LAM, the refining module (RM), and some major blend methods. As revealed in the table, MPB plays a key role in the architecture, contributing a $\sim 4\%$ improvement. We replace LAM with PCB proposed in VCNet [53], and the results show that LAM achieves a $\sim 1\%$ improvement. RM produces a $\sim 2.5\%$ improvement. We also compare the results by adopting different blend modes for image translation, and ABM yields an improvement of $1\sim 1.5\%$ compared to other methods. Below we analyze the effectiveness of each module in detail based on the visualization results.
233
+
234
+ On MPB. MPB realizes the localization of the target region to guide local retouching. With the assistance of the mask predicted by MPB, LRB achieves a better semantic perception of the image under a limited model capacity. As shown in Fig. 6, without MPB (column b), the model produces a certain blur effect in the non-target region (the local region on the top), and it is susceptible to background distraction. The changing areas of the results show that MPB helps to keep the non-target region intact to a large extent. Moreover, thanks to the attention to the local target region, precise retouched results are obtained.
235
+
236
+ On LAM. We compare LAM with PCB [53], which exhibits its effectiveness in the image inpainting task. As shown in Fig. 6 (column c), the network with PCB fails to make full use of the textures of the target region and results in the loss of details that should be preserved. In contrast, our LAM renders local retouching with high detail fidelity.
237
+
238
+ On ABM. To validate the effectiveness of ABM for extending local retouched results from low resolution to high resolution, we compare it with various blend methods as well as other translation strategies. As shown in Fig. 7, directly upsampling and refining the RGB results loses plenty of
239
+
240
+ ![](images/ab6d545dc16de37f82c062bc54e6c4c6ce7a50aa31dd76b329395c33832d24e8.jpg)
241
+
242
+ ![](images/a825ac898915ab646313989b7895a86ed089fae4c9da2eeba4b9c2fb8bb7cd9c.jpg)
243
+ (a) Input
244
+
245
+ ![](images/e635d9285e49947aef6e48b11699e9874b75a80455be8c39fdab2bd38175ef60.jpg)
246
+
247
+ ![](images/24e66e40fa418d2d0fb8d54a45b480ef4419beb92d500604a896bbce28f2eec1.jpg)
248
+ (b)Refine RGB
249
+
250
+ ![](images/44c82b36fd0ae1ce5060c2cd33c7d7afbc67c6177b531b7a1b9a76cc6dbd53ba.jpg)
251
+
252
+ ![](images/1e378068af69346ef4f5a3a122b05d9cc20bb7d6147b0f63b1e64226bc78e718.jpg)
253
+ (c) Addition
254
+
255
+ ![](images/1b97134d4a6c8b748bfe3f1761ec6c79ace26e7cc796443c122d7a2a850e1faa.jpg)
256
+
257
+ ![](images/7430c55d7ad029af586bc804b5dc049565b9fef2d876fafa620aefed552ed965.jpg)
258
+ (d) Soft Light
259
+
260
+ ![](images/352c9c21c43bc46c09ff87cdbe3a55114813e06633baa83f02f7ae6e7e859ecb.jpg)
261
+
262
+ ![](images/eab99219ca63d946510a0657bc354924fe077c7d7665d565898b82a12004605c.jpg)
263
+ (e) Convs
264
+
265
+ ![](images/ece574a72e937b0637783bb8954432d14def741e03e69fcf4db4fdb90027cd71.jpg)
266
+
267
+ ![](images/6a12e8147b14cb32726b185483ab0ca21517a8e1ea51e337c6504ee3828512bd.jpg)
268
+ (f) Ours
269
+
270
+ ![](images/63a10ea9dc2f92763c2d7a98702393bdb58a7863899c79de4d75730152a91ee6.jpg)
271
+
272
+ ![](images/80791e4163ee29ffab28893d620eaa72d52dbfc783c56afdae0d2bf66f5a5e99.jpg)
273
+ (g) Target
274
+
275
+ ![](images/13676bb09a30ff21ffbf7fd42690ef4818cb3771c72af734c5d0d70391a21411.jpg)
276
+ Figure 7. Visual comparison among different blend methods on CRHD-3K, including (b) refining RGB directly, (c) Addition [1], (d) Soft Light [2], (e) adaptive blend with convolutions and (f) ours. To facilitate visualization, we scale all the blend layer values to $0 \sim 255$ .
277
+ (a) Input
278
+ Figure 8. Ablation study toward the Refining Module on FFHQR and CRHD-3K. For better observation, we only present some local regions of the blend layers and the corresponding RGB results.
279
+
280
+ ![](images/173ffeaed5d22b3abf8d02492d139566d820a69d835d8f856880786cd4bbf381.jpg)
281
+ (b) w/o Refining Module
282
+
283
+ ![](images/222037de4aecc2e0f08f2de739b71348ecc8880a706852a7a7cc9424ba80d330.jpg)
284
+ (c) Ours
285
+
286
+ ![](images/eb3539b57b2804e8008261026c18cf35e7800414edcdc1f56c589c9c055c31f8.jpg)
287
+ (d) Target
288
+
289
+ details, resulting in blurred effects. We adopt some existing blend modes with fixed functions used in digital image editing, such as Addition [1] and Soft Light [2]. The Addition blend mode, which adopts a linear translation, is unable to fit well when the color of the local region changes severely. Limited by its transformation ability, the Soft Light blend mode cannot greatly change pixel values near 0 and 255 (as shown in column d). We also design two 3-layer convolutional networks to replace Eq. (3) and Eq. (4), respectively, for adaptive blending. However, owing to the irreversibility of the two translations, this variant is prone to producing color differences. With powerful transformation capabilities and good reversibility, the proposed ABM achieves much smoother and more realistic results.
290
+
291
+ On RM. The refining module is proposed to progressively compensate for the lack of details in the low-resolution blend layer. As shown in Fig. 8, RM recovers substantial detail for the blend layer, enabling precise retouching of the local region.
292
+
293
+ # 5.5. High-resolution Expansion Capability
294
+
295
+ BPL has a powerful capability for upward expansion. By increasing $l$ in Fig. 3, we can achieve local retouching on ultra high-resolution photos at a very low cost. Table 3 shows the quantitative results and runtime of our model at different resolutions. It can be seen that even for 4K-resolution images, the model still achieves good retouched results at a very fast speed. Visual examples of 4K images are
296
+
297
+ <table><tr><td rowspan="2">MPB</td><td rowspan="2">LAM</td><td colspan="5">Blend methods</td><td rowspan="2">RM</td><td rowspan="2">PSNR</td></tr><tr><td>RGB</td><td>Addition</td><td>Soft Light</td><td>Convs</td><td>Ours</td></tr><tr><td></td><td>✓</td><td></td><td></td><td></td><td></td><td>✓</td><td>✓</td><td>33.02</td></tr><tr><td>✓</td><td></td><td></td><td></td><td></td><td></td><td>✓</td><td>✓</td><td>36.24</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td></td><td></td><td>✓</td><td></td><td>34.78</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td></td><td></td><td></td><td></td><td>✓</td><td>35.76</td></tr><tr><td>✓</td><td>✓</td><td></td><td>✓</td><td></td><td></td><td></td><td>✓</td><td>36.57</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td>✓</td><td></td><td></td><td>✓</td><td>36.10</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td></td><td>✓</td><td></td><td>✓</td><td>35.88</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td></td><td></td><td>✓</td><td>✓</td><td>37.35</td></tr></table>
298
+
299
+ Table 2. Quantitative ablation experiments on CRHD-3K.
300
+
301
+ <table><tr><td>Resolution</td><td>LPIPS†</td><td>PSNR¶</td><td>SSIM¶</td><td>Runtime</td><td>Memory</td></tr><tr><td>512×512 (l=1)</td><td>0.027</td><td>37.50</td><td>0.971</td><td>0.008</td><td>1043MB</td></tr><tr><td>1024×1024 (l=2)</td><td>0.029</td><td>37.35</td><td>0.971</td><td>0.009</td><td>1329MB</td></tr><tr><td>2048×2048 (l=3)</td><td>0.029</td><td>37.24</td><td>0.968</td><td>0.010</td><td>2505MB</td></tr><tr><td>4096×4096 (l=4)</td><td>0.030</td><td>37.19</td><td>0.969</td><td>0.014</td><td>7191MB</td></tr></table>
302
+
303
+ Table 3. Comparison of evaluation metrics, runtime, and memory consumption of our model in the case of different resolutions on CRHD-3K. The runtime denotes the average inference time of all test samples on a single NVIDIA Tesla P100 GPU (16 GB).
304
+
305
+ provided in the supplementary material.
306
+
307
+ # 6. Conclusion
308
+
309
+ We summarize a kind of photo retouching as the local photo retouching (LPR) task and develop a novel solution to it, giving full consideration to the intrinsic characteristics of the task itself. Specifically, we design a context-aware local retouching layer based on a multi-task architecture to implement mask prediction and local retouching simultaneously. By utilizing the predicted mask as guidance, global context and local texture can be fully captured to render consistent retouching. Then, we build a pyramid based on the adaptive blend module and the refining module to expand the low-resolution results to the high-resolution ones progressively, showing great extensibility and high fidelity. Consequently, our method exhibits excellent performance in terms of the retouching quality as well as the running speed, achieving real-time inference on 4K images with a single NVIDIA Tesla P100 GPU. In addition, we introduce the first high-definition clothing retouching dataset CRHD-3K to promote the research on clothing retouching and LPR.
310
+
311
+ # References
312
+
313
+ [1] Blend modes. https://en.wikipedia.org/wiki/Blend_modes. 5, 8
314
+ [2] Pegtop blend modes: soft light. http://www.pegtop.net/delphi/articles/blendmodes/softlight.htm. 5, 8
315
+ [3] Kyungjune Baek, Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Hyunjung Shim. Rethinking the truly unsupervised image-to-image translation. In ICCV, 2021. 2
316
+ [4] P. J. Burt and E. H. Adelson. The laplacian pyramid as a compact image code. Readings in Computer Vision, 31(4):671-679, 1987. 3
317
+ [5] Jianrui Cai, Shuhang Gu, and Lei Zhang. Learning a deep single image contrast enhancer from multi-exposure images. TIP, 2018. 1, 2
318
+ [6] Nian Cai, Zhenghang Su, Zhineng Lin, Han Wang, Zhijing Yang, and Bingo Wing-Kuen Ling. Blind inpainting using the fully convolutional neural network. The Visual Computer, 2017. 2
319
+ [7] Qifeng Chen and Vladlen Koltun. Photographic image synthesis with cascaded refinement networks. In ICCV, 2017. 2
320
+ [8] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, 2018. 2
321
+ [9] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In CVPR, 2020. 2
322
+ [10] Yubin Deng, Chen Change Loy, and Xiaoou Tang. Aesthetic-driven image enhancement by adversarial learning. In ACM MM, 2018. 2
323
+ [11] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016. 2
324
+ [12] Michael Gharbi, Jiawen Chen, Jonathan T Barron, Samuel W Hasinoff, and Fredo Durand. Deep bilateral learning for real-time image enhancement. TOG, 2017. 1, 2, 3
325
+ [13] Xiefan Guo, Hongyu Yang, and Di Huang. Image inpainting via conditional texture and structure dual generation. In ICCV, 2021. 2
326
+ [14] Jingwen He, Yihao Liu, Yu Qiao, and Chao Dong. Conditional sequential modulation for efficient global image retouching. In ECCV, 2020. 1, 2
327
+ [15] Xiaowei Hu, Yitong Jiang, Chi-Wing Fu, and Pheng-Ann Heng. Mask-shadowgan: Learning to remove shadows from unpaired data. In ICCV, 2019. 2, 3
328
+ [16] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017. 2
329
+ [17] Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In ECCV, 2018. 2
330
+ [18] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017. 2, 6
331
+
332
+ [19] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016. 2, 6
333
+ [20] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, 2019. 6
334
+ [21] Satoshi Kosugi and Toshihiko Yamasaki. Unpaired image enhancement featuring reinforcement-learning-controlled image editing software. In AAAI, 2020. 1, 2
335
+ [22] Jingyuan Li, Ning Wang, Lefei Zhang, Bo Du, and Dacheng Tao. Recurrent feature reasoning for image inpainting. In CVPR, 2020. 2
336
+ [23] Xinyang Li, Shengchuan Zhang, Jie Hu, Liujuan Cao, Xiaopeng Hong, Xudong Mao, Feiyue Huang, Yongjian Wu, and Rongrong Ji. Image-to-image translation via hierarchical style disentanglement. In CVPR, 2021. 2
337
+ [24] Jie Liang, Hui Zeng, Miaomiao Cui, Xuansong Xie, and Lei Zhang. Ppr10k: A large-scale portrait photo retouching dataset with human-region mask and group-level consistency. In CVPR, 2021. 3
338
+ [25] Jie Liang, Hui Zeng, and Lei Zhang. High-resolution photorealistic image translation in real-time: A laplacian pyramid translation network. In CVPR, 2021. 1, 2, 3, 6, 7
339
+ [26] Liang Liao, Jing Xiao, Zheng Wang, Chia-Wen Lin, and Shin'ichi Satoh. Image inpainting guided by coherence priors of semantics and textures. In CVPR, 2021. 2
340
+ [27] Ji Lin, Richard Zhang, Frieder Ganz, Song Han, and JunYan Zhu. Anycost gans for interactive image synthesis and editing. In CVPR, 2021. 2
341
+ [28] Guilin Liu, Fitsum A Reda, Kevin J Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In ECCV, 2018. 2
342
+ [29] Hongyu Liu, Bin Jiang, Yibing Song, Wei Huang, and Chao Yang. Rethinking image inpainting via a mutual encoder-decoder with feature equalizations. In ECCV, 2020. 2
343
+ [30] Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. In NeurIPS, 2017. 2
344
+ [31] Yang Liu, Jinshan Pan, and Zhixun Su. Deep blind image inpainting. In IScIDE, 2019. 2
345
+ [32] Zhihao Liu, Hui Yin, Yang Mi, Mengyang Pu, and Song Wang. Shadow removal by a lightness-guided network with training on unpaired data. TIP, 2021. 2, 3
346
+ [33] Zhihao Liu, Hui Yin, Xinyi Wu, Zhenyao Wu, Yang Mi, and Song Wang. From shadow generation to shadow removal. In CVPR, 2021. 2, 3
347
+ [34] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 3DV, 2016. 6
348
+ [35] Kamyar Nazeri, Eric Ng, Tony Joseph, Faisal Qureshi, and Mehran Ebrahimi. Edgeconnect: Structure guided image inpainting using edge prediction. In ICCVW, 2019. 2
349
+ [36] Taesung Park, Alexei A Efros, Richard Zhang, and JunYan Zhu. Contrastive learning for unpaired image-to-image translation. In ECCV, 2020. 2
350
+ [37] Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. Semantic image synthesis with spatially-adaptive normalization. In CVPR, 2019. 2
351
+
352
+ [38] Taesung Park, Jun-Yan Zhu, Oliver Wang, Jingwan Lu, Eli Shechtman, Alexei A Efros, and Richard Zhang. Swapping autoencoder for deep image manipulation. In NeurIPS, 2020. 2
353
+ [39] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016. 2
354
+ [40] Rui Qian, Robby T Tan, Wenhan Yang, Jiajun Su, and Jiaying Liu. Attentive generative adversarial network for raindrop removal from a single image. In CVPR, 2018. 2, 3
355
+ [41] Ruijie Quan, Xin Yu, Yuanzhi Liang, and Yi Yang. Removing raindrops and rain streaks in one go. In CVPR, 2021. 2, 3
356
+ [42] Dongwei Ren, Wangmeng Zuo, Qinghua Hu, Pengfei Zhu, and Deyu Meng. Progressive image deraining networks: A better and simpler baseline. In CVPR, 2019. 2, 3
357
+ [43] Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, and Daniel Cohen-Or. Encoding in style: a stylegan encoder for image-to-image translation. In CVPR, 2021. 2
358
+ [44] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015. 4
359
+ [45] Artsiom Sanakoyeu, Dmytro Kotovenko, Sabine Lang, and Bjorn Ommer. A style-aware content loss for real-time hd style transfer. In ECCV, 2018. 2
360
+ [46] Alireza Shafaei, James J Little, and Mark Schmidt. Autoretouch: Automatic professional face retouching. In WACV, 2021. 1, 2, 6, 7
361
+ [47] Tamar Rott Shaham, Michael Gharbi, Richard Zhang, Eli Shechtman, and Tomer Michaeli. Spatially-adaptive pixelwise networks for fast image translation. In CVPR, 2021. 2, 3, 6, 7
362
+ [48] Hong Wang, Qi Xie, Qian Zhao, and Deyu Meng. A model-driven deep neural network for single image rain removal. In CVPR, 2020. 2, 3
363
+ [49] Hong Wang, Zongsheng Yue, Qi Xie, Qian Zhao, Yefeng Zheng, and Deyu Meng. From rain generation to rain removal. In CVPR, 2021. 2, 3
364
+ [50] Ruixing Wang, Qing Zhang, Chi-Wing Fu, Xiaoyong Shen, Wei-Shi Zheng, and Jiaya Jia. Underexposed photo enhancement using deep illumination estimation. In CVPR, 2019. 2
365
+ [51] Tengfei Wang, Hao Ouyang, and Qifeng Chen. Image inpainting with external-internal learning and monochromatic bottleneck. In CVPR, 2021. 2
366
+ [52] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In CVPR, 2018. 2, 3, 6, 7
367
+ [53] Yi Wang, Ying-Cong Chen, Xin Tao, and Jiaya Jia. Vcnet: A robust approach to blind image inpainting. In ECCV, 2020. 2, 6, 7
368
+ [54] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Generative image inpainting with contextual attention. In CVPR, 2018. 2
369
+ [55] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Free-form image inpainting with gated convolution. In ICCV, 2019. 2, 4
370
+
371
+ [56] Yingchen Yu, Fangneng Zhan, Shijian Lu, Jianxiong Pan, Feiying Ma, Xuansong Xie, and Chunyan Miao. Wavefill: A wavelet-based generation network for image inpainting. In ICCV, 2021. 2
372
+ [57] Hui Zeng, Jianrui Cai, Lida Li, Zisheng Cao, and Lei Zhang. Learning image-adaptive 3d lookup tables for high performance photo enhancement in real-time. TPAMI, 2020. 1, 2
373
+ [58] Yu Zeng, Zhe Lin, Huchuan Lu, and Vishal M Patel. Crfill: Generative image inpainting with auxiliary contextual reconstruction. In ICCV, 2021. 2
374
+ [59] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017. 2
abpnadaptiveblendpyramidnetworkforrealtimelocalretouchingofultrahighresolutionphoto/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:84996cff1fa09319120c421aa581ffe9c1bae6c80883f11f42c7ca337d3aac36
3
+ size 665462
abpnadaptiveblendpyramidnetworkforrealtimelocalretouchingofultrahighresolutionphoto/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:232ab857c2a2c8ce2b9afecd3379ba436f8e4c47258c13302d76373c7e4ff785
3
+ size 450794
acceleratingdetrconvergenceviasemanticalignedmatching/93bf8c77-36d2-40ed-b94e-4226b5a6b6a2_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0ce45670004732644d146895ac8ad8e1f6cbbbe9d98415b51eb3f89f62c45118
3
+ size 73493
acceleratingdetrconvergenceviasemanticalignedmatching/93bf8c77-36d2-40ed-b94e-4226b5a6b6a2_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f72f3f24bad3916abf01c80ee010f1c5b7806978f7c8601c52571a43a8dd0bf2
3
+ size 92106
acceleratingdetrconvergenceviasemanticalignedmatching/93bf8c77-36d2-40ed-b94e-4226b5a6b6a2_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f4289c81575cae1ea825131bc287617bfc27d2f930e0ef1ed8a9ac11ff448e74
3
+ size 5510511
acceleratingdetrconvergenceviasemanticalignedmatching/full.md ADDED
@@ -0,0 +1,273 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Accelerating DETR Convergence via Semantic-Aligned Matching
2
+
3
+ Gongjie Zhang $^{1}$ Zhipeng Luo $^{1,2}$ Yingchen Yu $^{1}$ Kaiwen Cui $^{1}$ Shijian Lu $^{*1}$ $^{1}$ Nanyang Technological University, Singapore $^{2}$ SenseTime Research
4
+
5
+ {gongjiezhang, shijian.lu}@ntu.edu.sg {zhipeng001, yingchen001, kaiwen001}@e.ntu.edu.sg
6
+
7
+ # Abstract
8
+
9
+ The recently developed DEtection TRansformer (DETR) establishes a new object detection paradigm by eliminating a series of hand-crafted components. However, DETR suffers from extremely slow convergence, which increases the training cost significantly. We observe that the slow convergence is largely attributed to the complexity of matching object queries with target features in different feature embedding spaces. This paper presents SAM-DETR, a Semantic-Aligned-Matching DETR that greatly accelerates DETR's convergence without sacrificing its accuracy. SAM-DETR addresses the convergence issue from two perspectives. First, it projects object queries into the same embedding space as encoded image features, where the matching can be accomplished efficiently with aligned semantics. Second, it explicitly searches salient points with the most discriminative features for semantic-aligned matching, which further speeds up the convergence and boosts detection accuracy as well. Working in a plug-and-play manner, SAM-DETR complements existing convergence solutions well yet only introduces slight computational overhead. Extensive experiments show that the proposed SAM-DETR achieves superior convergence as well as competitive detection accuracy. The implementation code is publicly available at https://github.com/ZhangGongjie/SAM-DETR.
10
+
11
+ # 1. Introduction
12
+
13
+ Object detection is one of the most fundamental tasks in computer vision and has achieved unprecedented progress with the development of deep learning [27]. However, most object detectors suffer from complex detection pipelines and sub-optimal performance due to their overreliance on hand-crafted components such as anchors, rule-based target assignment, and non-maximum suppression (NMS). The recently proposed DEtection TRansformer (DETR) [3] removes the need for such hand-designed components and establishes a fully end-to-end framework for
14
+
15
+ ![](images/0e74a7c22bef202c5f128b8777604f91f55abecb08aca548c0383fc34e40df2f.jpg)
16
+ Figure 1. Convergence curves of our proposed SAM-DETR and other detectors on COCO val 2017 under the 12-epoch training scheme. All competing methods are single-scale. SAM-DETR converges much faster than the original DETR, and can work in complementary with existing convergence-boosting solutions, reaching a comparable convergence speed with Faster R-CNN.
17
+
18
+ object detection. Despite its simple design and promising results, one of the most significant drawbacks of DETR is its extremely slow convergence on training, which requires 500 epochs to converge on the COCO benchmark [26], while Faster R-CNN [35] only takes $12\sim 36$ epochs instead. This slow convergence issue significantly increases the training cost and thus hinders its more comprehensive applications.
19
+
20
+ DETR employs a set of object queries in its decoder to detect target objects at different spatial locations. As shown in Fig. 2, in the cross-attention module, these object queries are trained with a set-based global loss to match the target objects and distill corresponding features from the matched regions for subsequent prediction. However, as pointed out in [10, 31, 63], each object query is almost equally matched to all spatial locations at initialization, thus
21
+
22
+ ![](images/7898d79a96ca0e130d4a9c7cbfb55a6fea85a370b4c6e9d71bcd74b87bc4c995.jpg)
23
+ Figure 2. The cross-attention module in DETR's decoder can be interpreted as a 'matching and feature distillation' process. Each object query first matches its own relevant regions in encoded image features, and then distills features from the matched regions, generating output for subsequent prediction.
24
+
25
+ requiring tedious training iterations to learn to focus on relevant regions. The matching difficulty between object queries and corresponding target features is the major reason for DETR's slow convergence.
26
+
27
+ A few recent works have been proposed to tackle the slow convergence issue of DETR. For example, Deformable DETR [63] replaces the original global dense attention with deformable attention that only attends to a small set of features to lower the complexity and speed up convergence. Conditional DETR [31] and SMCA-DETR [10] modify the cross-attention module to be spatially conditioned. In contrast, our approach works from a different perspective without modifying the attention mechanism.
28
+
29
+ Our core idea is to ease the matching process between object queries and their corresponding target features. One promising direction for matching has been defined by Siamese-based architecture, which aligns the semantics of both matching sides via two identical sub-networks to project them into the same embedding space. Its effectiveness has been demonstrated in various matching-involved vision tasks, such as object tracking [1, 4, 20, 21, 46, 47], re-identification [5, 37, 38, 48, 59], and few-shot recognition [15, 19, 39, 41, 55]. Motivated by this observation, we propose Semantic-Aligned-Matching DETR (SAM-DETR), which appends a plug-and-play module ahead of the crossattention module to semantically align object queries with encoded image features, thus facilitating the subsequent matching between them. This imposes a strong prior for object queries to focus on semantically similar regions in encoded image features. In addition, motivated by the importance of objects' keypoints and extremities in recognition and localization [3, 31, 62], we propose to explicitly
30
+
31
+ search multiple salient points and use them for semantic-aligned matching, which naturally fits in the DETR's original multi-head attention mechanism. Our approach only introduces a plug-and-play module into the original DETR while leaving most other operations unchanged. Therefore, the proposed method can be easily integrated with existing convergence solutions in a complementary manner.
32
+
33
+ In summary, the contributions of this work are fourfold. First, we propose Semantic-Aligned-Matching DETR (SAM-DETR), which significantly accelerates DETR's convergence by innovatively interpreting its cross-attention as a 'matching and distillation' process and semantically aligning object queries with encoded image features to facilitate their matching. Second, we propose to explicitly search for objects' salient points with the most discriminative features and feed them to the cross-attention module for semantic-aligned matching, which further boosts the detection accuracy and speeds up the convergence of our model. Third, experiments validate that our proposed SAM-DETR achieves significantly faster convergence compared with the original DETR. Fourth, as our approach only adds a plug-and-play module into the original DETR and leaves other operations mostly unchanged, the proposed SAM-DETR can be easily integrated with existing solutions that modify the attention mechanism to further improve DETR's convergence, leading to a comparable convergence speed with Faster R-CNN even within 12 training epochs.
34
+
35
+ # 2. Related Work
36
+
37
+ Object Detection. Modern object detection methods can be broadly classified into two categories: two-stage and single-stage detectors. Two-stage detectors mainly include Faster R-CNN [35] and its variants [2, 9, 16, 23, 32, 44, 49, 51, 54], which employ a Region Proposal Network (RPN) to generate region proposals and then make per-region predictions over them. Single-stage detectors [17, 28, 29, 33, 34, 43, 57, 61, 62] skip the proposal generation and directly perform object classification and localization over densely placed sliding windows (anchors) or object centers. However, most of these approaches still rely on many handcrafted components, such as anchor generation, rule-based training target assignment, and non-maximum suppression (NMS) post-processing, thus are not fully end-to-end.
38
+
39
+ Distinct from the detectors mentioned above, the recently proposed DETR [3] has established a new paradigm for object detection [50, 55, 56, 60, 63]. It employs a Transformer [45] encoder-decoder architecture and a set-based global loss to replace the hand-crafted components, achieving the first fully end-to-end object detector. However, DETR suffers from severely slow convergence and requires extra-long training to reach good performance compared with those two-stage and single-stage detectors. Several works have been proposed to mitigate this issue:
40
+
41
+ Deformable DETR [63] replaces the original dense attention with sparse deformable attention; Conditional DETR [31] and SMCA-DETR [10] propose conditioned cross-attention and Spatially Modulated Co-Attention (SMCA), respectively, to replace the cross-attention module in DETR's decoder, aiming to impose spatial constraints on the original cross-attention so that it focuses better on prominent regions. In this work, we also aim to improve DETR's convergence, but from a different perspective. Our approach does not modify the original attention mechanism in DETR, and can thus work in a complementary manner with existing methods.
42
+
43
+ Siamese-based Architecture for Matching. Matching is a common concept in vision tasks, especially in contrastive tasks such as face recognition [36, 40], re-identification [5, 14,22,37,38,48,59], object tracking [1,4,8,11,20,21,42,46, 47,52,58,64], few-shot recognition [15, 19, 39, 41, 53, 55], etc. Its core idea is to predict the similarity between two inputs. Empirical results have shown that Siamese-based architectures, which project both matching sides into the same embedding space, perform exceptionally well on the tasks involving matching. Our work is motivated by this observation to interpret DETR's cross-attention as a 'matching and feature distillation' process. To achieve fast convergence, it is crucial to ensure the aligned semantics between object queries and encoded image features, i.e., both of them are projected into the same embedding space.
44
+
45
+ # 3. Proposed Method
46
+
47
+ In this section, we first review the basic architecture of DETR, and then introduce the architecture of our proposed Semantic-Aligned-Matching DETR (SAM-DETR). We also show how to integrate our approach with existing convergence solutions to boost DETR's convergence further. Finally, we present and analyze the visualization of a few examples to illustrate the mechanism of our approach and demonstrate its effectiveness.
48
+
49
+ # 3.1. A Review of DETR
50
+
51
+ DETR [3] formulates the task of object detection as a set prediction problem and addresses it with a Transformer [45] encoder-decoder architecture. Given an image $\mathbf{I} \in \mathbb{R}^{H_0 \times W_0 \times 3}$ , the backbone and the Transformer encoder produce the encoded image features $\mathbf{F} \in \mathbb{R}^{HW \times d}$ , where $d$ is the feature dimension, and $H_0$ , $W_0$ and $H$ , $W$ denote the spatial sizes of the image and the features, respectively. Then, the encoded image features $\mathbf{F}$ and a small set of object queries $\mathbf{Q} \in \mathbb{R}^{N \times d}$ are fed into the Transformer decoder to produce detection results, where $N$ is the number of object queries, typically $100 \sim 300$ .
52
+
53
+ In the Transformer decoder, object queries are sequentially processed by a self-attention module, a cross-attention module, and a feed-forward network (FFN) to produce the
54
+
55
+ outputs, which further go through a Multi-Layer Perceptron (MLP) to generate prediction results. A good way to interpret this process is: object queries denote potential objects at different spatial locations; the self-attention module performs message passing among different object queries; and in the cross-attention module, object queries first search for the corresponding regions to match, then distill relevant features from the matched regions for the subsequent predictions. The cross-attention mechanism is formulated as:
56
+
57
+ $$
58
+ \mathbf{Q}^{\prime} = \overbrace{\operatorname{Softmax}\left(\frac{\left(\mathbf{Q}\mathbf{W}_{\mathrm{q}}\right)\left(\mathbf{F}\mathbf{W}_{\mathrm{k}}\right)^{\mathrm{T}}}{\sqrt{d}}\right)\left(\mathbf{F}\mathbf{W}_{\mathrm{v}}\right)}^{\text{to match relevant regions}}, \tag{1}
59
+ $$
60
+
61
+ where $\mathbf{W}_{\mathrm{q}}$ , $\mathbf{W}_{\mathrm{k}}$ , and $\mathbf{W}_{\mathrm{v}}$ are the linear projections for query, key, and value in the attention mechanism. Ideally, the cross-attention module's output $\mathbf{Q}' \in \mathbb{R}^{N \times d}$ should contain relevant information distilled from the encoded image features to predict object classes and locations.
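+
+ To make the 'matching and feature distillation' reading of Eq. 1 concrete, the following is a minimal single-head sketch in PyTorch. The tensor names mirror the notation above; the sizes and random inputs are illustrative assumptions rather than values from the paper.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ d, N, HW = 256, 300, 1064                  # feature dim, #object queries, #flattened spatial positions (illustrative)
+ Q = torch.randn(N, d)                      # object query embeddings
+ F_enc = torch.randn(HW, d)                 # encoded image features F, flattened to HW x d
+
+ W_q = nn.Linear(d, d, bias=False)          # query projection W_q
+ W_k = nn.Linear(d, d, bias=False)          # key projection W_k
+ W_v = nn.Linear(d, d, bias=False)          # value projection W_v
+
+ # "Matching": dot-product similarity between projected queries and features,
+ # softmax-normalized into attention weights over all HW spatial locations.
+ attn = torch.softmax(W_q(Q) @ W_k(F_enc).t() / d ** 0.5, dim=-1)    # (N, HW)
+
+ # "Distillation": each query aggregates features from its matched regions.
+ Q_out = attn @ W_v(F_enc)                                           # (N, d), i.e. Q' in Eq. 1
+ ```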
62
+
63
+ However, as pointed out in [10,31,63], the object queries are initially equally matched to all spatial locations in the encoded image features, and it is very challenging for the object queries to learn to focus on specific regions properly. The matching difficulty is the key reason that causes the slow convergence issue of DETR.
64
+
65
+ # 3.2. SAM-DETR
66
+
67
+ Our proposed SAM-DETR aims to relieve the difficulty of the matching process in Eq. 1 by semantically aligning object queries and encoded image features into the same embedding space, thus accelerating DETR's convergence. Its major difference from the original DETR [3] lies in the Transformer decoder layers. As illustrated in Fig. 3 (a), the proposed SAM-DETR appends a Semantics Aligner module ahead of the cross-attention module and models learnable reference boxes to facilitate the matching process. As in DETR, the decoder layer is repeated six times, with zeros as the input to the first layer and the previous layer's outputs as the input to subsequent layers.
68
+
69
+ The learnable reference boxes $\mathbf{R}_{\mathrm{box}} \in \mathbb{R}^{N \times 4}$ are modeled at the first decoder layer, representing the initial locations of the corresponding object queries. With the localization guidance of these reference boxes, the proposed Semantics Aligner takes the previous object query embeddings $\mathbf{Q}$ and the encoded image features $\mathbf{F}$ as inputs to generate new object query embeddings $\mathbf{Q}^{\mathrm{new}}$ and their position embeddings $\mathbf{Q}_{\mathrm{pos}}^{\mathrm{new}}$ , feeding to the subsequent cross-attention module. The generated embeddings $\mathbf{Q}^{\mathrm{new}}$ are enforced to lie in the same embedding space with the encoded image features $\mathbf{F}$ , which facilitates the subsequent matching process between them, making object queries able to quickly and properly attend to relevant regions in the encoded image features.
70
+
71
+ ![](images/26a78641a9708e4878bb93a570aa4e80de6bf7363b0d78f59e419faa59515dad.jpg)
72
+ Figure 3. The proposed Semantic-Aligned-Matching DETR (SAM-DETR) appends a Semantics Aligner into the Transformer decoder layer. (a) The architecture of one decoder layer in SAM-DETR. It models a learnable reference box for each object query, whose center location is used to generate corresponding position embeddings. With the guidance of the reference boxes, Semantics Aligner generates new object queries that are semantically aligned with the encoded image features, thus facilitating their subsequent matching. (b) The pipeline of the proposed Semantics Aligner. For simplicity, only one object query is illustrated. It first leverages the reference box to extract features from the corresponding region via RoIAlign. The region features are then used to predict the coordinates of salient points with the most discriminative features. The salient points' features are then extracted as the new query embeddings with aligned semantics, which are further reweighted by previous query embeddings to incorporate useful information from them.
73
+
74
+ ![](images/d9aa21c55df11087acea2715997c16054b362a3958a5137df5733cf585b5bbbe.jpg)
75
+
76
+ # 3.2.1 Semantic-Aligned Matching
77
+
78
+ As shown in Eq. 1 and Fig. 2, the cross-attention module applies a dot-product to object queries and encoded image features, producing attention weight maps that indicate the matching between object queries and target regions. The dot-product is an intuitive choice since it measures the similarity between two vectors, encouraging object queries to assign higher attention weights to more similar regions. However, the original DETR [3] does not enforce object queries and encoded image features to be semantically aligned, i.e., projected into the same embedding space. Therefore, the object query embeddings are randomly projected to an embedding space at initialization and are thus almost equally matched to all spatial locations of the encoded image features. Consequently, extremely long training is needed to learn a meaningful matching between them.
79
+
80
+ Based on the above observation, the proposed Semantics Aligner provides a semantic alignment mechanism that keeps object query embeddings in the same embedding space as the encoded image features, which guarantees the
81
+
82
+ dot-product between them is a meaningful measurement of similarity. This is accomplished by resampling object queries from the encoded image features based on the reference boxes, as shown in Fig. 3(b). Given the encoded image features $\mathbf{F}$ and object queries' reference boxes $\mathbf{R}_{\mathrm{box}}$ , the Semantics Aligner first restores the spatial dimensions of the encoded image features from 1D sequences $HW \times d$ to 2D maps $H \times W \times d$ . Then, it applies RoIAlign [12] to extract region-level features $\mathbf{F}_{\mathrm{R}} \in \mathbb{R}^{N \times 7 \times 7 \times d}$ from the encoded image features. The new object query embeddings $\mathbf{Q}^{\mathrm{new}}$ and their position embeddings $\mathbf{Q}_{\mathrm{pos}}^{\mathrm{new}}$ are then obtained via resampling from $\mathbf{F}_{\mathrm{R}}$ , as detailed in the ensuing subsection.
83
+
84
+ $$
85
+ \mathbf{F}_{\mathrm{R}} = \operatorname{RoIAlign}\left(\mathbf{F}, \mathbf{R}_{\mathrm{box}}\right) \tag{2}
86
+ $$
87
+
88
+ $$
89
+ \mathbf{Q}^{\mathrm{new}}, \mathbf{Q}_{\mathrm{pos}}^{\mathrm{new}} = \operatorname{Resample}\left(\mathbf{F}_{\mathrm{R}}, \mathbf{R}_{\mathrm{box}}, \mathbf{Q}\right) \tag{3}
90
+ $$
91
+
92
+ Since the resampling process does not involve any projection, the new object query embeddings $\mathbf{Q}^{\mathrm{new}}$ share the exact same embedding space with the encoded image features $\mathbf{F}$ , yielding a strong prior for object queries to focus on semantically similar regions.
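+
+ The alignment step of Eqs. 2–3 can be sketched as follows, assuming torchvision's `roi_align` and, for brevity, average pooling as the resampling operator (Sec. 3.2.2 replaces the pooling with salient-point sampling). All shapes and boxes are illustrative.
+
+ ```python
+ import torch
+ from torchvision.ops import roi_align
+
+ N, d, H, W = 300, 256, 28, 38                         # illustrative sizes
+ F_enc = torch.randn(1, d, H, W)                       # encoded image features restored to a 2D map
+
+ # Learnable reference boxes R_box, here random (x1, y1, x2, y2) in feature-map coordinates.
+ x1y1 = torch.rand(N, 2) * torch.tensor([W - 8.0, H - 8.0])
+ R_box = torch.cat([x1y1, x1y1 + 4.0 + 4.0 * torch.rand(N, 2)], dim=1)
+
+ # Eq. 2: extract 7x7 region-level features for every reference box.
+ rois = torch.cat([torch.zeros(N, 1), R_box], dim=1)   # roi_align expects a batch index as the first column
+ F_R = roi_align(F_enc, rois, output_size=7)           # (N, d, 7, 7)
+
+ # Eq. 3 (simplified): resample new queries directly from F_R without any learned projection,
+ # so Q_new lies in the same embedding space as F_enc.
+ Q_new = F_R.mean(dim=(2, 3))                          # (N, d)
+ ```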
93
+
94
+ # 3.2.2 Matching with Salient Point Features
95
+
96
+ Multi-head attention plays an indispensable role in DETR, which allows each head to focus on different parts and thus significantly strengthens its modeling capacity. Besides, prior works [3, 31, 62] have identified the importance of objects' most discriminative salient points in object detection. Inspired by these observations, instead of naively resampling by average-pooling or max-pooling, we propose to explicitly search for multiple salient points and employ their features for the aforementioned semantic-aligned matching. Such design naturally fits in the multi-head attention mechanism [45] without any modification.
97
+
98
+ Let us denote the number of attention heads as $M$ , which is typically set to 8. As shown in Fig. 3 (b), after retrieving region-level features $\mathbf{F}_{\mathrm{R}}$ via RoIAlign, we apply a ConvNet followed by a multi-layer perceptron (MLP) to predict $M$ coordinates $\mathbf{R}_{\mathrm{SP}} \in \mathbb{R}^{N \times M \times 2}$ for each region, representing the salient points that are crucial for recognizing and localizing the objects.
99
+
100
+ $$
101
+ \mathbf{R}_{\mathrm{SP}} = \operatorname{MLP}\left(\operatorname{ConvNet}\left(\mathbf{F}_{\mathrm{R}}\right)\right) \tag{4}
102
+ $$
103
+
104
+ It is worth noting that we constrain the predicted coordinates to be within the reference boxes. This design choice has been empirically verified in Section 4.3. Salient points' features are then sampled from $\mathbf{F}_{\mathrm{R}}$ via bilinear interpolation. The $M$ sampled feature vectors corresponding to the $M$ searched salient points are finally concatenated as the new object query embeddings, so that each attention head can focus on features from one salient point.
105
+
106
+ $$
107
+ \mathbf{Q}^{\mathrm{new}\prime} = \operatorname{Concat}\left(\left\{\mathbf{F}_{\mathrm{R}}[\dots, x, y, \dots] \;\; \text{for} \;\; (x, y) \in \mathbf{R}_{\mathrm{SP}}\right\}\right) \tag{5}
108
+ $$
109
+
110
+ The new object queries' position embeddings are generated using sinusoidal functions with salient points' image-scale coordinates as input. Similarly, position embeddings corresponding to $M$ salient points are also concatenated to feed to the subsequent multi-head cross-attention module.
111
+
112
+ $$
113
+ \mathbf{Q}_{\mathrm{pos}}^{\mathrm{new}\prime} = \operatorname{Concat}\left(\operatorname{Sinusoidal}\left(\mathbf{R}_{\mathrm{box}}, \mathbf{R}_{\mathrm{SP}}\right)\right) \tag{6}
114
+ $$
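+
+ The following sketch walks through Eqs. 4–6 with small placeholder prediction heads: $M$ salient points are predicted inside each reference box (normalized to $[0, 1]$ within the RoI), and their bilinearly sampled features are concatenated. The head architectures and the handling of the feature width are simplifications for illustration, not the paper's exact configuration.
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as NF
+
+ N, d, M = 300, 256, 8                          # queries, feature dim, attention heads / salient points
+ F_R = torch.randn(N, d, 7, 7)                  # region features from RoIAlign (Eq. 2)
+
+ # Eq. 4: ConvNet + MLP predict M (x, y) coordinates per region; sigmoid keeps them inside the box.
+ convnet = nn.Sequential(nn.Conv2d(d, d, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
+ mlp = nn.Sequential(nn.Flatten(), nn.Linear(d, d), nn.ReLU(), nn.Linear(d, M * 2))
+ R_SP = torch.sigmoid(mlp(convnet(F_R))).view(N, M, 2)       # in [0, 1]^2, relative to the reference box
+
+ # Eq. 5: bilinearly sample the M salient-point features and concatenate them, so that each
+ # attention head can later attend with one point's features (feature widths handled loosely here).
+ grid = (R_SP * 2 - 1).view(N, 1, M, 2)                      # grid_sample expects coords in [-1, 1]
+ sampled = NF.grid_sample(F_R, grid, align_corners=False)    # (N, d, 1, M)
+ Q_new_prime = sampled.squeeze(2).transpose(1, 2).reshape(N, M * d)
+
+ # Eq. 6 (not shown): sinusoidal position embeddings are built from the salient points'
+ # image-scale coordinates (box location + box size * R_SP) and concatenated the same way.
+ ```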
115
+
116
+ # 3.2.3 Reweighting by Previous Query Embeddings
117
+
118
+ The Semantics Aligner effectively generates new object queries that are semantically aligned with encoded image features, but also brings one issue: previous query embeddings $\mathbf{Q}$ that contain valuable information for detection are not leveraged at all in the cross-attention module. To mitigate this issue, the proposed Semantics Aligner also takes previous query embeddings $\mathbf{Q}$ as inputs to generate reweighting coefficients via a linear projection followed by a sigmoid function. Through element-wise multiplication with the reweighting coefficients, both new query embeddings and their position embeddings are reweighted to highlight important features, thus effectively leveraging useful
119
+
120
+ information from previous query embeddings. This process can be formulated as:
121
+
122
+ $$
123
+ \mathbf{Q}^{\mathrm{new}} = \mathbf{Q}^{\mathrm{new}\prime} \otimes \sigma\left(\mathbf{Q}\mathbf{W}_{\mathrm{RW1}}\right) \tag{7}
124
+ $$
125
+
126
+ $$
127
+ \mathbf{Q}_{\mathrm{pos}}^{\mathrm{new}} = \mathbf{Q}_{\mathrm{pos}}^{\mathrm{new}\prime} \otimes \sigma\left(\mathbf{Q}\mathbf{W}_{\mathrm{RW2}}\right), \tag{8}
128
+ $$
129
+
130
+ where $\mathbf{W}_{\mathrm{RW1}}$ and $\mathbf{W}_{\mathrm{RW2}}$ denote linear projections, $\sigma(\cdot)$ denotes sigmoid function, and $\otimes$ denotes element-wise multiplication.
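+
+ A minimal sketch of Eqs. 7–8; the width of the aligned query embeddings (here $M \cdot d$, matching the concatenation sketch above) is an illustrative assumption.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ N, d, M = 300, 256, 8
+ Q_prev = torch.randn(N, d)                  # previous object query embeddings Q
+ Q_new_prime = torch.randn(N, M * d)         # semantically aligned queries (Eq. 5)
+ Q_pos_prime = torch.randn(N, M * d)         # their position embeddings (Eq. 6)
+
+ W_RW1 = nn.Linear(d, M * d, bias=False)     # projections that produce the reweighting coefficients
+ W_RW2 = nn.Linear(d, M * d, bias=False)
+
+ # Eqs. 7-8: sigmoid gates predicted from Q, applied element-wise to the new embeddings.
+ Q_new = Q_new_prime * torch.sigmoid(W_RW1(Q_prev))
+ Q_pos_new = Q_pos_prime * torch.sigmoid(W_RW2(Q_prev))
+ ```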
131
+
132
+ # 3.3. Compatibility with SMCA-DETR
133
+
134
+ As illustrated in Fig. 3(a), our proposed SAM-DETR only adds a plug-and-play module with slight computational overhead, leaving most other operations like the attention mechanism unchanged. Therefore, our approach can easily work with existing convergence solutions in a complementary manner to facilitate DETR's convergence further. We demonstrate the excellent compatibility of our approach by integrating it with SMCA-DETR [10], a state-of-the-art method to accelerate DETR's convergence.
135
+
136
+ SMCA-DETR [10] replaces the original cross-attention with Spatially Modulated Co-Attention (SMCA), which estimates the spatial locations of object queries and applies 2D-Gaussian weight maps to constrain the attention responses. In SMCA-DETR [10], both the center locations and the scales for the 2D-Gaussian weight maps are predicted from the object query embeddings. To integrate our proposed SAM-DETR with SMCA, we make slight modifications: we adopt the coordinates of $M$ salient points predicted by Semantics Aligner as the center locations for the 2D Gaussian-like weight maps, and simultaneously predict the scales of weight maps from pooled RoI features. Experimental results demonstrate the complementary effect between our proposed approach and SMCA-DETR [10].
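+
+ As a rough illustration of this integration, the sketch below builds one 2D Gaussian-like weight map per query and per head, centred at the predicted salient points. The scales are treated as given, and how the maps modulate the attention weights follows SMCA-DETR [10] only in spirit (indicated in the comment), so this is a sketch rather than the exact formulation.
+
+ ```python
+ import torch
+
+ N, M, H, W = 300, 8, 28, 38                                   # queries, heads, feature-map size (illustrative)
+ centers = torch.rand(N, M, 2) * torch.tensor([float(W), float(H)])  # salient-point centers (x, y) on the map
+ scales = 2.0 + 4.0 * torch.rand(N, M, 2)                      # per-head widths (sigma_x, sigma_y), assumed predicted
+
+ xs = torch.arange(W, dtype=torch.float32).view(1, 1, 1, W)
+ ys = torch.arange(H, dtype=torch.float32).view(1, 1, H, 1)
+ dx = xs - centers[..., 0].view(N, M, 1, 1)
+ dy = ys - centers[..., 1].view(N, M, 1, 1)
+
+ # Gaussian-like spatial prior per query/head; in SMCA-style attention such maps are used to
+ # modulate the cross-attention weights so each head focuses around its salient point.
+ G = torch.exp(-dx ** 2 / (2 * scales[..., 0].view(N, M, 1, 1) ** 2)
+               - dy ** 2 / (2 * scales[..., 1].view(N, M, 1, 1) ** 2))   # (N, M, H, W)
+ ```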
137
+
138
+ # 3.4. Visualization and Analysis
139
+
140
+ Fig. 4 visualizes the salient points searched by the proposed Semantics Aligner, as well as their attention weight maps generated from the multi-head cross-attention module. We also compare them with the original DETR's attention weight maps. Both models are trained for 12 epochs with ResNet-50 [13] as their backbones.
141
+
142
+ It can be observed that the searched salient points mostly fall within the target objects and typically are the most distinctive locations that are crucial for object recognition and localization. This illustrates the effectiveness of our approach in searching salient features for the subsequent matching process. Besides, as shown in the attention weight maps from different heads, the sampled features from each salient point can effectively match target regions and narrow down the search range as reflected by the area of attention maps. Consequently, the model can effectively and efficiently attend to the extremities of the target objects as
143
+
144
+ ![](images/8bc8a462976141aa0aa78cf549e62b2f79e051323b3b9c40526fb0ea212eb112.jpg)
145
+ Figure 4. Visualization of SAM-DETR's searched salient points and their attention weight maps. The searched salient points mostly fall within the target objects and precisely indicate the locations with the most discriminative features for object recognition and localization. Compared with the original DETR, SAM-DETR's attention weight maps are more precise, demonstrating that our method effectively narrows down the search space for matching and facilitates convergence. In contrast, the original DETR's attention weight maps are more scattered, suggesting its inefficiency for matching relevant regions and distilling distinctive features.
146
+
147
+ shown in the overall attention maps, which greatly facilitates the convergence. In contrast, the attention maps generated from the original DETR are much more scattered, failing to locate the extremities efficiently and accurately. Such observation aligns with our motivation that the complication in matching object queries to target features is the primary reason for DETR's slow convergence. The visualization also proves the effectiveness of our proposed design in easing the matching difficulty via semantic-aligned matching and explicitly searched salient features.
148
+
149
+ # 4. Experiments
150
+
151
+ # 4.1. Experiment Setup
152
+
153
+ Dataset and Evaluation Metrics. We conduct experiments on the COCO 2017 dataset [26], which contains $\sim 117\mathrm{k}$ training images and $5\mathrm{k}$ validation images. Standard evaluation metrics for COCO are adopted to evaluate the performance of object detection.
154
+
155
+ Implementation Details. The implementation details of SAM-DETR mostly align with the original DETR [3]. We adopt ImageNet-pretrained [7] ResNet-50 [13] as the backbone, and train our model with $8 \times$ Nvidia V100 GPUs using the AdamW optimizer [18, 30]. The initial learning rate is set as $1 \times 10^{-5}$ for the backbone and $1 \times 10^{-4}$ for the Transformer encoder-decoder framework, with a weight decay of $1 \times 10^{-4}$ . The learning rate is decayed at a later stage by 0.1. The batch size is set to 16. When using ResNet-50 with dilations (R50-DC5), the batch size is 8. Model-architecture-related hyper-parameters stay the same with DETR, except we increase the number of object queries $N$ from 100 to 300, and replace cross-entropy loss for classification with sigmoid focal loss [25]. Both design changes align with the recent works to facilitate DETR's convergence [10, 31, 63].
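+
+ The optimizer setup described above can be written roughly as follows; the stand-in model exists only so that the backbone / non-backbone parameter split by name prefix is runnable.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ # Stand-in for a SAM-DETR-style detector; only the "backbone." name prefix matters here.
+ model = nn.ModuleDict({"backbone": nn.Linear(8, 8), "transformer": nn.Linear(8, 8)})
+
+ param_groups = [
+     {"params": [p for n, p in model.named_parameters() if n.startswith("backbone")], "lr": 1e-5},
+     {"params": [p for n, p in model.named_parameters() if not n.startswith("backbone")], "lr": 1e-4},
+ ]
+ optimizer = torch.optim.AdamW(param_groups, weight_decay=1e-4)
+
+ # 12-epoch schedule: decay the learning rate by 0.1 after epoch 10 (after epoch 40 for the 50-epoch schedule).
+ scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10], gamma=0.1)
+ ```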
156
+
157
+ We adopt the same data augmentation scheme as DETR [3], which includes horizontal flip, random crop, and random resize with the longest side at most 1333 pixels and the shortest side at least 480 pixels.
158
+
159
+ <table><tr><td>Method</td><td>multi-scale</td><td>#Epochs</td><td>#Params (M)</td><td>GFLOPs</td><td>AP</td><td>\( AP_{0.5} \)</td><td>\( AP_{0.75} \)</td><td>\( AP_S \)</td><td>\( AP_M \)</td><td>\( AP_L \)</td></tr><tr><td colspan="11">Baseline methods trained for long epochs:</td></tr><tr><td>Faster-RCNN-R50-DC5 [35]</td><td></td><td>108</td><td>166</td><td>320</td><td>41.1</td><td>61.4</td><td>44.3</td><td>22.9</td><td>45.9</td><td>55.0</td></tr><tr><td>Faster-RCNN-FPN-R50 [24,35]</td><td>✓</td><td>108</td><td>42</td><td>180</td><td>42.0</td><td>62.1</td><td>45.5</td><td>26.6</td><td>45.4</td><td>53.4</td></tr><tr><td>DETR-R50 [3]</td><td></td><td>500</td><td>41</td><td>86</td><td>42.0</td><td>62.4</td><td>44.2</td><td>20.5</td><td>45.8</td><td>61.1</td></tr><tr><td>DETR-R50-DC5 [3]</td><td></td><td>500</td><td>41</td><td>187</td><td>43.3</td><td>63.1</td><td>45.9</td><td>22.5</td><td>47.3</td><td>61.1</td></tr><tr><td colspan="11">Comparison of SAM-DETR with other detectors under shorter training schemes:</td></tr><tr><td>Faster-RCNN-R50 [35]</td><td></td><td>12</td><td>34</td><td>547</td><td>35.7</td><td>56.1</td><td>38.0</td><td>19.2</td><td>40.9</td><td>48.7</td></tr><tr><td>DETR-R50 [3]‡</td><td></td><td>12</td><td>41</td><td>86</td><td>22.3</td><td>39.5</td><td>22.2</td><td>6.6</td><td>22.8</td><td>36.6</td></tr><tr><td>Deformable-DETR-R50 [63]</td><td></td><td>12</td><td>34</td><td>78</td><td>31.8</td><td>51.4</td><td>33.5</td><td>15.0</td><td>35.7</td><td>44.7</td></tr><tr><td>Conditional-DETR-R50 [31]</td><td></td><td>12</td><td>44</td><td>90</td><td>32.2</td><td>52.1</td><td>33.4</td><td>13.9</td><td>34.5</td><td>48.7</td></tr><tr><td>SMCA-DETR-R50 [10]</td><td></td><td>12</td><td>42</td><td>86</td><td>31.6</td><td>51.7</td><td>33.1</td><td>14.1</td><td>34.4</td><td>46.5</td></tr><tr><td>SAM-DETR-R50 (Ours)</td><td></td><td>12</td><td>58</td><td>100</td><td>33.1</td><td>54.2</td><td>33.7</td><td>13.9</td><td>36.5</td><td>51.7</td></tr><tr><td>SAM-DETR-R50 w/ SMCA (Ours)</td><td></td><td>12</td><td>58</td><td>100</td><td>36.0</td><td>56.8</td><td>37.3</td><td>15.8</td><td>39.4</td><td>55.3</td></tr><tr><td>Faster-RCNN-R50-DC5 [35]</td><td></td><td>12</td><td>166</td><td>320</td><td>37.3</td><td>58.8</td><td>39.7</td><td>20.1</td><td>41.7</td><td>50.0</td></tr><tr><td>DETR-R50-DC5 [3]‡</td><td></td><td>12</td><td>41</td><td>187</td><td>25.9</td><td>44.4</td><td>26.0</td><td>7.9</td><td>27.1</td><td>41.4</td></tr><tr><td>Deformable-DETR-R50-DC5 [63]</td><td></td><td>12</td><td>34</td><td>128</td><td>34.9</td><td>54.3</td><td>37.6</td><td>19.0</td><td>38.9</td><td>47.5</td></tr><tr><td>Conditional-DETR-R50-DC5 [31]</td><td></td><td>12</td><td>44</td><td>195</td><td>35.9</td><td>55.8</td><td>38.2</td><td>17.8</td><td>38.8</td><td>52.0</td></tr><tr><td>SMCA-DETR-R50-DC5 [10]</td><td></td><td>12</td><td>42</td><td>187</td><td>32.5</td><td>52.8</td><td>33.9</td><td>14.2</td><td>35.4</td><td>48.1</td></tr><tr><td>SAM-DETR-R50-DC5 (Ours)</td><td></td><td>12</td><td>58</td><td>210</td><td>38.3</td><td>59.1</td><td>40.1</td><td>21.0</td><td>41.8</td><td>55.2</td></tr><tr><td>SAM-DETR-R50-DC5 w/ SMCA (Ours)</td><td></td><td>12</td><td>58</td><td>210</td><td>40.6</td><td>61.1</td><td>42.8</td><td>21.9</td><td>43.9</td><td>58.5</td></tr><tr><td>Faster-RCNN-R50 [35]</td><td></td><td>36</td><td>34</td><td>547</td><td>38.4</td><td>58.7</td><td>41.3</td><td>20.7</td><td>42.7</td><td>53.1</td></tr><tr><td>DETR-R50 
[3]‡</td><td></td><td>50</td><td>41</td><td>86</td><td>34.9</td><td>55.5</td><td>36.0</td><td>14.4</td><td>37.2</td><td>54.5</td></tr><tr><td>Deformable-DETR-R50 [63]</td><td></td><td>50</td><td>34</td><td>78</td><td>39.4</td><td>59.6</td><td>42.3</td><td>20.6</td><td>43.0</td><td>55.5</td></tr><tr><td>Conditional-DETR-R50 [31]</td><td></td><td>50</td><td>44</td><td>90</td><td>40.9</td><td>61.8</td><td>43.3</td><td>20.8</td><td>44.6</td><td>59.2</td></tr><tr><td>SMCA-DETR-R50 [10]</td><td></td><td>50</td><td>42</td><td>86</td><td>41.0</td><td>-</td><td>-</td><td>21.9</td><td>44.3</td><td>59.1</td></tr><tr><td>SAM-DETR-R50 (Ours)</td><td></td><td>50</td><td>58</td><td>100</td><td>39.8</td><td>61.8</td><td>41.6</td><td>20.5</td><td>43.4</td><td>59.6</td></tr><tr><td>SAM-DETR-R50 w/ SMCA (Ours)</td><td></td><td>50</td><td>58</td><td>100</td><td>41.8</td><td>63.2</td><td>43.9</td><td>22.1</td><td>45.9</td><td>60.9</td></tr><tr><td>Deformable-DETR-R50 [63]</td><td>✓</td><td>50</td><td>40</td><td>173</td><td>43.8</td><td>62.6</td><td>47.7</td><td>26.4</td><td>47.1</td><td>58.0</td></tr><tr><td>SMCA-DETR-R50 [10]</td><td>✓</td><td>50</td><td>40</td><td>152</td><td>43.7</td><td>63.6</td><td>47.2</td><td>24.2</td><td>47.0</td><td>60.4</td></tr><tr><td>Faster-RCNN-R50-DC5 [35]</td><td></td><td>36</td><td>166</td><td>320</td><td>39.0</td><td>60.5</td><td>42.3</td><td>21.4</td><td>43.5</td><td>52.5</td></tr><tr><td>DETR-R50-DC5 [3]‡</td><td></td><td>50</td><td>41</td><td>187</td><td>36.7</td><td>57.6</td><td>38.2</td><td>15.4</td><td>39.8</td><td>56.3</td></tr><tr><td>Deformable-DETR-R50-DC5 [63]</td><td></td><td>50</td><td>34</td><td>128</td><td>41.5</td><td>61.8</td><td>44.9</td><td>24.1</td><td>45.3</td><td>56.0</td></tr><tr><td>Conditional-DETR-R50-DC5 [31]</td><td></td><td>50</td><td>44</td><td>195</td><td>43.8</td><td>64.4</td><td>46.7</td><td>24.0</td><td>47.6</td><td>60.7</td></tr><tr><td>SAM-DETR-R50-DC5 (Ours)</td><td></td><td>50</td><td>58</td><td>210</td><td>43.3</td><td>64.4</td><td>46.2</td><td>25.1</td><td>46.9</td><td>61.0</td></tr><tr><td>SAM-DETR-R50-DC5 w/ SMCA (Ours)</td><td></td><td>50</td><td>58</td><td>210</td><td>45.0</td><td>65.4</td><td>47.9</td><td>26.2</td><td>49.0</td><td>63.3</td></tr><tr><td colspan="11">Accelerating DETR&#x27;s convergence with self-supervised learning:</td></tr><tr><td>UP-DETR-R50 [6]</td><td></td><td>150</td><td>41</td><td>86</td><td>40.5</td><td>60.8</td><td>42.6</td><td>19.0</td><td>44.4</td><td>60.0</td></tr><tr><td>UP-DETR-R50 [6]</td><td></td><td>300</td><td>41</td><td>86</td><td>42.8</td><td>63.0</td><td>45.3</td><td>20.8</td><td>47.1</td><td>61.7</td></tr></table>
160
+
161
+ Table 1. Comparison of the proposed SAM-DETR, other DETR-like detectors, and Faster R-CNN on COCO 2017 val set. $\ddagger$ denotes the original DETR [3] with aligned setups, including increased number of object queries (100→300) and focal loss for classification.
162
+
163
+ We adopt two training schemes for experiments, which include a 12-epoch scheme where the learning rate decays after 10 epochs, as well as a 50-epoch scheme where the learning rate decays after 40 epochs.
164
+
165
+ # 4.2. Experiment Results
166
+
167
+ Table 1 presents a thorough comparison of the proposed SAM-DETR, other DETR-like detectors [3, 6, 10, 31, 63], and Faster R-CNN [35]. As shown, Faster R-CNN and DETR can both achieve impressive performance when trained for long epochs. However, when trained for only
168
+
169
+ 12 epochs, Faster R-CNN still achieves good performance, while DETR performs substantially worse due to its slow convergence. Several recent works [10, 31, 63] modify the original attention mechanism and effectively boost DETR's performance under the 12-epoch training scheme, but still have large gaps compared with the strong Faster R-CNN baseline. Used standalone, our proposed SAM-DETR achieves a significant performance gain over the original DETR baseline ($+10.8\%$ AP) and outperforms all DETR variants [10, 31, 63]. Furthermore, the proposed SAM-DETR can be easily integrated with existing
170
+
171
+ <table><tr><td rowspan="2">SAM</td><td colspan="4">Query Resampling Strategy</td><td rowspan="2">RW</td><td rowspan="2">AP</td><td rowspan="2">\( AP_{0.5} \)</td><td rowspan="2">\( AP_{0.75} \)</td></tr><tr><td>Avg</td><td>Max</td><td>SP x1</td><td>SP x8</td></tr><tr><td></td><td></td><td></td><td></td><td></td><td></td><td>22.3</td><td>39.5</td><td>22.2</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td></td><td></td><td>25.2</td><td>48.9</td><td>23.3</td></tr><tr><td>✓</td><td></td><td>✓</td><td></td><td></td><td></td><td>27.0</td><td>50.2</td><td>25.8</td></tr><tr><td>✓</td><td></td><td></td><td>✓</td><td></td><td></td><td>28.6</td><td>50.3</td><td>28.1</td></tr><tr><td>✓</td><td></td><td></td><td>✓</td><td></td><td>✓</td><td>30.3</td><td>52.0</td><td>29.8</td></tr><tr><td>✓</td><td></td><td></td><td></td><td>✓</td><td></td><td>32.0</td><td>53.4</td><td>32.8</td></tr><tr><td>✓</td><td></td><td></td><td></td><td>✓</td><td>✓</td><td>33.1</td><td>54.2</td><td>33.7</td></tr></table>
172
+
173
+ Table 2. Ablation studies on our proposed design choices. Results are obtained on COCO val 2017. 'SAM' denotes the proposed Semantic-Aligned Matching. 'RW' denotes reweighting by previous query embeddings. Different resampling strategies for SAM are studied, including average-pooling (Avg), max-pooling (Max), one salient point (SP x1), and eight salient points (SP x8).
174
+
175
+ <table><tr><td colspan="2">Salient Point Search Range</td><td rowspan="2">AP</td><td rowspan="2">AP0.5</td><td rowspan="2">AP0.75</td></tr><tr><td>within ref box</td><td>within image</td></tr><tr><td>✓</td><td></td><td>33.1</td><td>54.2</td><td>33.7</td></tr><tr><td></td><td>✓</td><td>30.0</td><td>52.3</td><td>29.2</td></tr></table>
176
+
177
+ Table 3. Ablation study on the salient point search range. Results are obtained on COCO val 2017.
178
+
179
+ convergence-boosting methods for DETR to achieve even better performance. Combining our proposed SAM-DETR with SMCA [10] brings an improvement of $+2.9\%$ AP compared with the standalone SAM-DETR, and $+4.4\%$ AP compared with SMCA-DETR [10], leading to performance on par with Faster R-CNN within 12 epochs. The convergence curves of the competing methods under the 12-epoch scheme are also presented in Fig. 1.
180
+
181
+ We also conduct experiments with a stronger backbone R50-DC5 and with a longer 50-epoch training scheme. Under various setups, the proposed SAM-DETR consistently improves the original DETR's performance and achieves state-of-the-art accuracy when further integrated with SMCA [10]. The superior performance under various setups demonstrates the effectiveness of our approach.
182
+
183
+ # 4.3. Ablation Study
184
+
185
+ We conduct ablation studies to validate the effectiveness of our proposed designs. Experiments are performed with ResNet-50 [13] under the 12-epoch training scheme.
186
+
187
+ Effect of Semantic-Aligned Matching (SAM). As shown in Table 2, the proposed SAM, together with any query resampling strategy, consistently outperforms the baseline. We highlight that even with the naive max-pooling resampling, $\mathrm{AP}_{0.5}$ improves by $10.7\%$ , a considerable margin. The results strongly support our claim that SAM effectively eases the complication in matching object queries to their corresponding target features, thus accelerating DETR's convergence.
188
+
189
+ Effect of Searching Salient Points. As shown in Table 2, different query resampling strategies lead to large variance in detection accuracy. Max-pooling performs better than average-pooling, suggesting that detection relies more on key features rather than treating all features equally. This motivates us to explicitly search salient points and use their features for semantic-aligned matching. Results show that searching just one salient point and resampling its features as new object queries outperforms the naive resampling strategies. Furthermore, sampling multiple salient points can naturally work with the multi-head attention mechanism, further strengthening the representation capability of the new object queries and boosting performance.
190
+
191
+ # Searching within Boxes vs. Searching within Images.
192
+
193
+ As introduced in Section 3.2.2, salient points are searched within the corresponding reference boxes. As shown in Table 3, searching salient points at the image scale (allowing salient points outside their reference boxes) degrades the performance. We suspect the performance drop is due to increased difficulty for matching with a larger search space. It is noteworthy that the original DETR's object queries do not have explicit search ranges, while our proposed SAM-DETR models learnable reference boxes with interpretable meanings, which effectively narrows down the search space, resulting in accelerated convergence.
194
+
195
+ Effect of Reweighting by Previous Embeddings. We believe previous object queries' embeddings contain helpful information for detection that should be effectively leveraged in the matching process. To this end, we predict a set of reweighting coefficients from previous query embeddings to apply to the newly generated object queries, highlighting critical features. As shown in Table 2, the proposed reweighting consistently boosts performance, indicating effective usage of knowledge from previous object queries.
196
+
197
+ # 4.4. Limitation
198
+
199
+ Compared with Faster R-CNN [35], SAM-DETR inherits from DETR [3] superior accuracy on large objects and degraded performance on small objects. One way to improve accuracy on small objects is to leverage multi-scale features, which we will explore in the future.
200
+
201
+ # 5. Conclusion
202
+
203
+ This paper proposes SAM-DETR to accelerate DETR's convergence. At the core of SAM-DETR is a plug-and-play module that semantically aligns object queries and encoded image features to facilitate the matching between them. It also explicitly searches salient point features for semantic-aligned matching. The proposed SAM-DETR can be easily integrated with existing convergence solutions to boost performance further, leading to a comparable accuracy with Faster R-CNN within 12 training epochs. We hope our work paves the way for more comprehensive research and applications of DETR.
204
+
205
+ # References
206
+
207
+ [1] Luca Bertinetto, Jack Valmadre, Joao F Henriques, Andrea Vedaldi, and Philip HS Torr. Fully-convolutional siamese networks for object tracking. In ECCV, 2016. 2, 3
208
+ [2] Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: Delving into high quality object detection. In CVPR, 2018. 2
209
+ [3] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020. 1, 2, 3, 4, 5, 6, 7, 8
210
+ [4] Xin Chen, Bin Yan, Jiawen Zhu, Dong Wang, Xiaoyun Yang, and Huchuan Lu. Transformer tracking. In CVPR, 2021. 2, 3
211
+ [5] Dahjung Chung, Khalid Tahboub, and Edward J Delp. A two stream siamese convolutional neural network for person re-identification. In ICCV, 2017. 2, 3
212
+ [6] Zhigang Dai, Bolun Cai, Yugeng Lin, and Junying Chen. UP-DETR: Unsupervised pre-training for object detection with transformers. In CVPR, 2021. 7
213
+ [7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009. 6
214
+ [8] Xingping Dong and Jianbing Shen. Triplet loss in siamese network for object tracking. In ECCV, 2018. 3
215
+ [9] Qi Fan, Wei Zhuo, Chi-Keung Tang, and Yu-Wing Tai. Few-shot object detection with attention-RPN and multi-relation detector. In CVPR, 2020. 2
216
+ [10] Peng Gao, Minghang Zheng, Xiaogang Wang, Jifeng Dai, and Hongsheng Li. Fast convergence of DETR with spatially modulated co-attention. In ICCV, 2021. 1, 2, 3, 5, 6, 7, 8
217
+ [11] Anfeng He, Chong Luo, Xinmei Tian, and Wenjun Zeng. A twofold siamese network for real-time object tracking. In CVPR, 2018. 3
218
+ [12] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, 2017. 4
219
+ [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. 5, 6, 8
220
+ [14] Shuting He, Hao Luo, Pichao Wang, Fan Wang, Hao Li, and Wei Jiang. TransReID: Transformer-based object re-identification. In ICCV, 2021. 3
221
+ [15] Ting-I Hsieh, Yi-Chen Lo, Hwann-Tzong Chen, and Tyng-Luh Liu. One-shot object detection with co-attention and co-excitation. In NeurIPS, 2019. 2, 3
222
+ [16] Han Hu, Jiayuan Gu, Zheng Zhang, Jifeng Dai, and Yichen Wei. Relation networks for object detection. In CVPR, 2018. 2
223
+ [17] Bingyi Kang, Zhuang Liu, Xin Wang, Fisher Yu, Jiashi Feng, and Trevor Darrell. Few-shot object detection via feature reweighting. In ICCV, 2019. 2
224
+ [18] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. 6
225
+ [19] Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, 2015. 2, 3
226
+
227
+ [20] Bo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, and Junjie Yan. SiamRPN++: Evolution of siamese visual tracking with very deep networks. In CVPR, 2019. 2, 3
228
+ [21] Bo Li, Junjie Yan, Wei Wu, Zheng Zhu, and Xiaolin Hu. High performance visual tracking with siamese region proposal network. In CVPR, 2018. 2, 3
229
+ [22] Yulin Li, Jianfeng He, Tianzhu Zhang, Xiang Liu, Yongdong Zhang, and Feng Wu. Diverse part discovery: Occluded person re-identification with part-aware transformer. In CVPR, 2021. 3
230
+ [23] Minghui Liao, Pengyuan Lyu, Minghang He, Cong Yao, Wenhao Wu, and Xiang Bai. Mask TextSpotter: An end-to-end trainable neural network for spotting text with arbitrary shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(2):532-548, 2021. 2
231
+ [24] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In CVPR, 2017. 7
232
+ [25] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In ICCV, 2017. 6
233
+ [26] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014. 1, 6
234
+ [27] Li Liu, Wanli Ouyang, Xiaogang Wang, Paul Fieguth, Jie Chen, Xinwang Liu, and Matti Pietikainen. Deep learning for generic object detection: A survey. International Journal of Computer Vision, 128:261-318, 2020. 1
235
+ [28] Songtao Liu, Di Huang, and Yunhong Wang. Receptive field block net for accurate and fast object detection. In ECCV, 2018. 2
236
+ [29] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. SSD: Single shot multibox detector. In ECCV, 2016. 2
237
+ [30] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2019. 6
238
+ [31] Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, and Jingdong Wang. Conditional DETR for fast training convergence. In ICCV, 2021. 1, 2, 3, 5, 6, 7
239
+ [32] Jiangmiao Pang, Kai Chen, Jianping Shi, Huajun Feng, Wanli Ouyang, and Dahua Lin. Libra R-CNN: Towards balanced learning for object detection. In CVPR, 2019. 2
240
+ [33] Juan-Manuel Perez-Rua, Xiatian Zhu, Timothy M Hospedales, and Tao Xiang. Incremental few-shot object detection. In CVPR, 2020. 2
241
+ [34] Joseph Redmon and Ali Farhadi. YOLO 9000: Better, faster, stronger. In CVPR, 2017. 2
242
+ [35] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NeurIPS, 2015. 1, 2, 7, 8
243
+ [36] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, 2015. 3
244
+
245
+ [37] Chen Shen, Zhongming Jin, Yiru Zhao, Zhihang Fu, Rongxin Jiang, Yaowu Chen, and Xian-Sheng Hua. Deep siamese network with multi-level similarity perception for person re-identification. In ACM MM, 2017. 2, 3
246
+ [38] Yantao Shen, Tong Xiao, Hongsheng Li, Shuai Yi, and Xiaogang Wang. Learning deep neural networks for vehicle Re-ID with visual-spatio-temporal path proposals. In ICCV, 2017. 2, 3
247
+ [39] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In NeurIPS, 2017. 2, 3
248
+ [40] Lingxue Song, Dihong Gong, Zhifeng Li, Changsong Liu, and Wei Liu. Occlusion robust face recognition based on mask learning with pairwise differential siamese network. In ICCV, 2019. 3
249
+ [41] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Relation network for few-shot learning. In CVPR, 2018. 2, 3
250
+ [42] Ran Tao, Efstratios Gavves, and Arnold WM Smeulders. Siamese instance search for tracking. In CVPR, 2016. 3
251
+ [43] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. FCOS: Fully convolutional one-stage object detection. In ICCV, 2019. 2
252
+ [44] Lachlan Tychsen-Smith and Lars Petersson. Improving object localization with fitness NMS and bounded IoU loss. In CVPR, 2018. 2
253
+ [45] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, L. Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017. 2, 3, 5
254
+ [46] Paul Voigtlaender, Jonathon Luiten, Philip HS Torr, and Bastian Leibe. Siam R-CNN: Visual tracking by re-detection. In CVPR, 2020. 2, 3
255
+ [47] Ning Wang, Wengang Zhou, Jie Wang, and Houqiang Li. Transformer meets tracker: Exploiting temporal context for robust visual tracking. In CVPR, 2021. 2, 3
256
+ [48] Lin Wu, Yang Wang, Junbin Gao, and Xue Li. Where-and-when to look: Deep siamese attention networks for videobased person re-identification. IEEE Transactions on Multimedia, 21(6):1412-1424, 2018. 2, 3
257
+ [49] Yang Xiao and Renaud Marlet. Few-shot object detection and viewpoint estimation for objects in the wild. In ECCV, 2020. 2
258
+ [50] Chuhui Xue, Shijian Lu, Song Bai, Wenqing Zhang, and Changhu Wang. I2C2W: Image-to-character-to-word transformers for accurate scene text recognition. arXiv preprint arXiv:2105.08383, 2021. 2
259
+ [51] Xiaopeng Yan, Ziliang Chen, Anni Xu, Xiaoxi Wang, Xiaodan Liang, and Liang Lin. Meta R-CNN: Towards general solver for instance-level low-shot learning. In ICCV, 2019. 2
260
+ [52] Fangao Zeng, Bin Dong, Tiancai Wang, Xiangyu Zhang, and Yichen Wei. MOTR: End-to-end Multiple-Object tracking with TRansformer. arXiv preprint arXiv:2105.03247, 2021. 3
261
+ [53] Gongjie Zhang, Kaiwen Cui, Rongliang Wu, Shijian Lu, and Yonghong Tian. PNPDet: Efficient few-shot detection without forgetting via plug-and-play sub-networks. In WACV, 2021. 3
262
+
263
+ [54] Gongjie Zhang, Shijian Lu, and Wei Zhang. CAD-Net: A context-aware detection network for objects in remote sensing imagery. IEEE Transactions on Geoscience and Remote Sensing, 57(12):10015–10024, 2019. 2
264
+ [55] Gongjie Zhang, Zhipeng Luo, Kaiwen Cui, and Shijian Lu. Meta-DETR: Image-level few-shot object detection with inter-class correlation exploitation. arXiv preprint arXiv:2103.11731, 2021. 2, 3
265
+ [56] Jingyi Zhang, Jiaxing Huang, Zhipeng Luo, Gongjie Zhang, and Shijian Lu. DA-DETR: Domain adaptive detection transformer by hybrid attention. arXiv preprint arXiv:2103.17084, 2021. 2
266
+ [57] Shifeng Zhang, Longyin Wen, Xiao Bian, Zhen Lei, and Stan Z Li. Single-shot refinement neural network for object detection. In CVPR, 2018. 2
267
+ [58] Zhipeng Zhang and Houwen Peng. Deeper and wider siamese networks for real-time visual tracking. In CVPR, 2019. 3
268
+ [59] Meng Zheng, Srikrishna Karanam, Ziyan Wu, and Richard J Radke. Re-identification with consistent attentive siamese networks. In CVPR, 2019. 2, 3
269
+ [60] Changqing Zhou, Zhipeng Luo, Yueru Luo, Tianrui Liu, Liang Pan, Zhongang Cai, Haiyu Zhao, and Shijian Lu. PTTR: Relational 3D point cloud object tracking with transformer. In CVPR, 2022. 2
270
+ [61] Xingyi Zhou, Dequan Wang, and Philipp Krahenbuhl. Objects as points. In arXiv preprint arXiv:1904.07850, 2019. 2
271
+ [62] Xingyi Zhou, Jiacheng Zhuo, and Philipp Krahenbuhl. Bottom-up object detection by grouping extreme and center points. In CVPR, 2019. 2, 5
272
+ [63] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable DETR: Deformable transformers for end-to-end object detection. In ICLR, 2021. 1, 2, 3, 6, 7
273
+ [64] Zheng Zhu, Qiang Wang, Bo Li, Wei Wu, Junjie Yan, and Weiming Hu. Distractor-aware siamese networks for visual object tracking. In ECCV, 2018. 3
acceleratingdetrconvergenceviasemanticalignedmatching/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bebbb493db91593dfa02175add65fd0b8ceed738795399308d097d4db51e6ed2
3
+ size 787554
acceleratingdetrconvergenceviasemanticalignedmatching/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:06c80ee7fb298c0fc4c4d7424c19237511676452d4896f6be177f907e81fd8a0
3
+ size 335526
acceleratingvideoobjectsegmentationwithcompressedvideo/8ac3eeea-b648-47f2-8829-27ee10c0ec1a_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5269f934dfe9598ae6a430f4357dae20a7fd663841972d80469f26e11cb3e620
3
+ size 77088
acceleratingvideoobjectsegmentationwithcompressedvideo/8ac3eeea-b648-47f2-8829-27ee10c0ec1a_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:686a2bd7300646158a3c82d9ed9d6900eea0d091871d4ec9b5d1dfdfeff74223
3
+ size 94591
acceleratingvideoobjectsegmentationwithcompressedvideo/8ac3eeea-b648-47f2-8829-27ee10c0ec1a_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2d3df016f4d945b390c181824fd17cb99689d8bf4c4168a77058d79c1ab5930b
3
+ size 2951778
acceleratingvideoobjectsegmentationwithcompressedvideo/full.md ADDED
@@ -0,0 +1,309 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Accelerating Video Object Segmentation with Compressed Video
2
+
3
+ Kai Xu Angela Yao National University of Singapore {kxu,ayao}@comp.nus.edu.sg
4
+
5
+ # Abstract
6
+
7
+ We propose an efficient plug-and-play acceleration framework for semi-supervised video object segmentation by exploiting the temporal redundancies in videos presented by the compressed bitstream. Specifically, we propose a motion vector-based warping method for propagating segmentation masks from keyframes to other frames in a bidirectional and multi-hop manner. Additionally, we introduce a residual-based correction module that can fix wrongly propagated segmentation masks from noisy or erroneous motion vectors. Our approach is flexible and can be added on top of several existing video object segmentation algorithms. We achieved highly competitive results on DAVIS17 and YouTube-VOS on various base models with substantial speed-ups of up to $3.5X$ with minor drops in accuracy.
8
+
9
+ # 1. Introduction
10
+
11
+ Video object segmentation (VOS) aims to obtain pixel-level masks of the objects in a video sequence. State-of-the-art methods [17,18,20,23] are highly accurate at segmenting the objects, but they can be slow, requiring as much as 0.2 seconds [20] to segment a frame. More efficient methods [3, 27, 37] typically trade off accuracy for speed.
12
+
13
+ To minimize this trade-off, we propose to leverage compressed videos for accelerating video object segmentation. Most videos on the internet today are stored and transmitted in a compressed format. Video compression encoders take a sequence of raw images as input and exploit the inherent spatial and temporal redundancies to compress the size by several magnitudes [14]. The encoding gives several sources of "free" information for VOS. Firstly, the bitstream's frame type (I- vs. P-/B-frames) gives some indication for keyframes, as the encoder separates the frames according to their information content. Secondly, the motion compensation scheme used in compression provides motion vectors that serve as a cheap approximation to optical flow.
14
+
15
+ ![](images/64fcfb430d0175a18d59f7d948ed0e6bfd5446c80fe0c48508b581b350618501.jpg)
16
+ Figure 1. Comparison of VOS methods on the DAVIS 17 dataset. We double the speed of STM, MiVOS, and STCN with minor drops in accuracy. The other compressed video method CVOS [32] achieves comparable speed but has a significant drop in accuracy.
17
+
18
+ Finally, the residuals give a strong indicator of problematic areas that may require refinement.
19
+
20
+ We aim to develop an accurate yet efficient VOS acceleration framework. As our interest is in acceleration, it is natural to follow a propagation-based approach in which an (heavy) off-the-shelf base network is applied to only keyframes. Acceleration is then achieved by propagating the keyframe segmentations and features to non-keyframes. In our framework, we leverage the information from the compressed video bitstream, specifically, the motion vectors and residuals, which are ideal for an efficient yet accurate propagation scheme.
21
+
22
+ Motion vectors are cheap to obtain – they simply need to be read out from the bitstream. However, they are also more challenging to work with than optical flow. Whereas optical flow fields are dense and defined on a pixel-wise basis, motion vectors are sparse. For example in HEVC [30], they are defined only for blocks of pixels, which greatly reduces the resolution of the motion information and introduces block
23
+
24
+ artifacts. Furthermore, in cases where the coding bitrate limit is too low, the encoder may not estimate the motion correctly; this often happens in complex scenes or under fast motions. As such, we propose a dedicated soft propagation module that suppresses noise. For further improvement, we also propose a mask correction module based on the bitstream residuals. Putting all of this together, we designed a new plug-and-play framework based on compressed videos to accelerate standard VOS methods [4, 5, 20]. We use these off-the-shelf methods as base networks to segment keyframes and then leverage the compressed videos' motion vectors for propagation and residuals for correction.
25
+
26
+ A key distinction between our motion vector propagation module and existing optical flow propagation methods [19,22, 23, 45] is that our module is bi-directional. We take advantage of the inherent bi-directional nature of motion vectors and propagate information both forwards and backwards. Our module is also multi-hop as we can propagate mask between non-keyframes. These features make our propagation scheme less prone to drift and occlusion errors.
27
+
28
+ A closely related work to ours is CVOS [32]. CVOS aims to develop a stand-alone VOS framework based on compressed videos, whereas we are proposing a plug-and-play acceleration module. A shortcoming of CVOS is that it considers only I- and P-frames but not B-frames in their framework. This setting is highly restrictive and uncommon, since B-frames were introduced to the default encoding setting specified by the MPEG-1 standard [14] over 30 years ago. In contrast, we consider I-, P- and B-frames, making our method more applicable and practical for modern compressed video settings.
29
+
30
+ Our experiments demonstrate that our module offers considerable speed-ups on several image sequence-based models (see Fig. 1). As a by-product of the keyframe selection, our module also reduces the memory of existing memory-networks [20, 28], which are some of the fastest and most accurate state-of-the-art VOS methods. We summarize our contributions below:
31
+
32
+ - A novel VOS acceleration module that leverages information from the compressed video bitstream for segmentation mask propagation and correction.
33
+ - A soft propagation module that takes as input inaccurate and blocky motion vectors but yields highly accurate warps in a multi-hop and bi-directional manner.
34
+ - A mask correction module that refines propagation errors and artifacts based on motion residuals.
35
+ - Our plug-and-play module is flexible and can be applied to off-the-shelf VOS methods to achieve up to $3.5 \times$ speed-ups with negligible drops in accuracy.
36
+
37
+ # 2. Related work
38
+
39
+ Video object segmentation approaches are either semi-supervised, in which an initial mask is provided for the video, or unsupervised, in which no mask is available. We limit our discussion here to semi-supervised methods. Semi-supervised VOS methods can be further divided into two types: matching-based and propagation-based. Matching-based VOS methods rely on limited appearance changes to either match the template and target frame or to learn an object detector. For example, [2, 27, 35] fine-tune a segmentation network using provided and estimated masks with extensive data augmentation. Other examples include memory-networks [4, 5, 20, 28] that perform reference-query matching for the target object based on features extracted from previous frames. Propagation-based VOS methods rely on temporal correlations to propagate segmentation masks from the annotated frames. A simple propagation strategy is to copy the previous mask [23], assuming limited change from frame-to-frame. Others works use motion-based cues from optical flow [6, 10, 34].
40
+
41
+ Keyframe propagation. Frame-wise propagation of information from keyframes to non-keyframes has been used for efficient semantic video segmentation [13, 22, 45], but little has been explored for its role in efficient VOS [15] due to several reasons. Firstly, selecting keyframes is non-trivial. For maximum efficiency, keyframes should be as few and distinct as possible; yet if they are too distinct, the gap becomes too large to propagate across. As a result, existing works select keyframes conservatively with either uniform sampling [13, 45] or thresholding of changes in low-level features [16]. Secondly, frame-wise propagation relies on optical flow, and computing accurate flow fields [11, 33] is still computationally expensive.
42
+
43
+ Our proposed framework is propagation-based, but we differ from similar approaches in that we use the compressed video bitstream for propagation and correction. Our method adaptively selects key-frames, and it is also the first to use a bi-directional and multi-hop propagation scheme.
44
+
45
+ Compressed videos have been used in various vision tasks. Early methods [1, 26] used the compressed bitstream to form feature descriptors for unsupervised object segmentation and detection. In contrast, we utilize the bitstream for propagation and correction to accelerate semi-supervised VOS. More recently, the use of compressed videos has been explored for object detection [38], saliency detection [41], action recognition [40] and, as discussed earlier, VOS [32]. These works leverage motion vectors and residuals as motion cues or bit allocation as indicators of saliency. As features in the bitstream are inherently coarse, most of the previous works have a significant accuracy drop compared to methods that use full videos or optical flow. Our work is the first compressed video method that can fill this gap.
46
+
47
+ # 3. Preliminaries
48
+
49
+ # 3.1. Compressed video format
50
+
51
+ Video in its raw form is a sequence of RGB images; however, it is unnecessary to store all the image frames. Video compression encoder-decoders, or CODECs, leverage frame-to-frame redundancies to minimize storage. We outline some essentials of the HEVC codec [30]; other CODECs like MPEG-4 [29] and H.264 [39] follow similar principles. Note that this section introduces only concepts relevant to understanding our framework. We refer to [31] for a more comprehensive discussion.
52
+
53
+ The HEVC coding structure consists of a series of frames called a Group of Pictures (GOP). Each GOP uses three frame types: I-frames, P-frames and B-frames. I-frames are fully encoded standalone, while P- and B-frames are encoded relatively based on motion compensation from other frames and residuals. Specifically, the P- and B-frames store motion vectors, which can be considered a block-wise analogue of optical flow between that frame and its' reference frame(s). Any discrepancies are then stored in that frame's residual. Fig. 2 shows the frame assignments of two sample GOPs. Video decoding is therefore an ordered process to ensure that reference frames are decoded first to preserve the chain of dependencies. Fig. 3 illustrates the dependencies in a sample sequence.
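+
+ As a rough illustration, the frame-type structure of a GOP can be inspected with the PyAV bindings to FFmpeg, assuming PyAV is available and exposes `pict_type` on decoded frames (recent versions do); the file name is a placeholder.
+
+ ```python
+ import av  # PyAV bindings to FFmpeg (assumed available)
+
+ counts = {}
+ with av.open("clip.mp4") as container:          # placeholder path
+     for frame in container.decode(video=0):
+         key = frame.pict_type.name              # 'I', 'P', or 'B' (assumed attribute; see note above)
+         counts[key] = counts.get(key, 0) + 1
+ print(counts)
+ ```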
54
+
55
+ # 3.2. Motion compensation in compressed videos
56
+
57
+ A key difference between optical flow and motion vectors is that optical flow is a dense vector field with respect to a neighbouring frame in time, whereas motion vectors are block-wise displacements with respect to arbitrary reference frame(s) within the GOP. The associated blocks are called Prediction Units (PU), and they vary in size from $64 \times 64$ to $8 \times 4$ or $4 \times 8$ pixels. PUs can be uni-directional, with reference frames from either the past or the future, or bi-directional, with references to both the past and the future. P-frames have only uni-directional PUs, while B-frames have both uni-directional and bi-directional PUs.
58
+
59
+ In this work, we denote a PU as $\Omega_{ij}$, with constituent pixels $(x,y) \in \Omega_{ij}$, where $i$ indexes the frame and $j$ indexes the PU in frame $i$. In the general bi-directional case, $\Omega_{ij}$ is associated with a pair of forward and backward motion vectors $(\vec{\mathbf{v}}_{ij}, \overleftarrow{\mathbf{v}}_{ij})$, where the right and left arrows denote forward and backward motion, respectively. The forward motion vector $\vec{\mathbf{v}}_{ij} = [\vec{u}, \vec{v}, \vec{t}]$ is given by displacements $\vec{u}$ and $\vec{v}$ and reference frame $\vec{t}$, where $\vec{t} < i$; analogously, $\overleftarrow{\mathbf{v}}_{ij} = [\overleftarrow{u}, \overleftarrow{v}, \overleftarrow{t}]$ denotes a backward motion vector with displacements $[\overleftarrow{u}, \overleftarrow{v}]$ and reference frame $\overleftarrow{t}$, where $\overleftarrow{t} > i$.
60
+
61
+ Based on the motion vectors, the pixels $(x,y)\in \Omega_{ij}$ can be predicted from co-located blocks of the same size as $\Omega_{ij}$ in the reference frames $I_{\vec{t}}$ and $I_{\overleftarrow{t}}$.
62
+
63
+ ![](images/7d0660a41db56a84f7d5b63a92193c29853a9df774fc27c95aa122a0cba401b2.jpg)
64
+
65
+ ![](images/92e2df4c79042a2bd1aca45eaed88986d7d9d4a528438294bbc485d2df67206f.jpg)
66
+
67
+ ![](images/3f88e6f9fe5e2813300e3f1f395436083e40d62d3af3e44614e52896cf554057.jpg)
68
+ Figure 2. Bar plots of a GOP visualizing frame assignments and the relative frame size. The 'bmx-trees' sequence has faster movements so it has more I/P-frames than 'bear' (37.5% vs. 23.2%). The red arrows mark displayed frames, which feature examples of block effects for the 'bear' sequence above and motion vector estimation failures for the 'bmx-trees' sequence below.
69
+
70
+ The reconstructed value $\hat{I}_i^{x,y}$ at pixel $(x,y)$ of frame $i$, for $(x,y) \in \Omega_{ij}$, is given as
71
+
72
+ $$
73
+ \hat{I}_{i}^{x, y} = \vec{w} \, I_{\vec{t}}^{x + \vec{u},\, y + \vec{v}} + \overleftarrow{w} \, I_{\overleftarrow{t}}^{x + \overleftarrow{u},\, y + \overleftarrow{v}}, \tag{1}
74
+ $$
75
+
76
+ where $(\vec{w}, \overleftarrow{w})$ are weighting components for the forward and backward motions, respectively, and $\vec{w} + \overleftarrow{w} = 1$. In the case of a uni-directional PU, either $\vec{w}$ or $\overleftarrow{w}$ is set to 0 and the corresponding $\vec{\mathbf{v}}_{ij}$ or $\overleftarrow{\mathbf{v}}_{ij}$ is undefined.
77
+
78
+ In older and more restrictive codec settings, such as those used in CVOS [32], reference frames were limited to I-frames. Modern codecs like HEVC, which we consider in this work, allow P- and B-frames to reference pixels in other P- and B-frames, which are themselves reconstructed from further references. This makes the reconstruction in Eq. (1) multi-hop and improves overall coding efficiency, since drifting is alleviated by keeping the temporal distance to each reference small. Examples of PUs and frame predictions are illustrated in Fig. 3. Motion vectors are inherently coarse and noisy, due to their block-wise nature and encoding errors in areas of fast and abrupt movement (see examples in Fig. 2). As such, the remaining differences between the RGB image $I_{i}$ and the prediction $\hat{I}_i$ at frame $i$ are stored in the residual $\mathbf{e}_i$ to recover pixel-level detail:
79
+
80
+ $$
81
+ I _ {i} = \hat {I} _ {i} + \mathbf {e} _ {i}. \tag {2}
82
+ $$
83
+
84
+ ![](images/2b38560abb7ff6fa7f0f4ce216427e7c82a303295f59fb9c6d69a25898d727db.jpg)
85
+ Figure 3. GOP schematic. Dashed lines denote motion compensation in prediction blocks. $I$ , $B$ and $P$ denote frame types.
86
+
87
+ In principle, $\mathbf{e}_i$ is sparse; its sparsity is directly correlated with the accuracy of the motion vector prediction. The key to efficient video encoding is balancing the storage savings of larger PUs for P- and B-frames, i.e. fewer motion vectors, against the denser residuals needed to compensate for the coarser block motions.
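+
+ To make the reconstruction above concrete, the following is a minimal NumPy sketch of the per-PU motion compensation of Eq. (1) followed by the residual addition of Eq. (2). The dictionary-based PU representation, the field names, and the assumption of integer, in-bounds displacements are ours for illustration only; they do not correspond to any particular decoder API.
+
+ ```python
+ import numpy as np
+
+ def reconstruct_frame(pus, ref_frames, residual, height, width):
+     """Illustrative motion compensation (Eq. 1) plus residual (Eq. 2).
+     pus: list of Prediction Units, each a dict such as
+          {"y0": 0, "x0": 0, "h": 64, "w": 64,
+           "fwd": (u, v, t) or None,   # forward motion vector, t < i
+           "bwd": (u, v, t) or None,   # backward motion vector, t > i
+           "w_fwd": 0.5, "w_bwd": 0.5} # weights with w_fwd + w_bwd = 1
+     ref_frames: dict mapping reference frame index t -> decoded HxWx3 array.
+     residual: HxWx3 residual e_i stored in the bitstream for this frame."""
+     pred = np.zeros((height, width, 3), dtype=np.float32)
+     for pu in pus:
+         y0, x0, h, w = pu["y0"], pu["x0"], pu["h"], pu["w"]
+         block = np.zeros((h, w, 3), dtype=np.float32)
+         for mv_key, w_key in (("fwd", "w_fwd"), ("bwd", "w_bwd")):
+             mv = pu[mv_key]
+             if mv is None:          # uni-directional PU: this direction is unused
+                 continue
+             u, v, t = mv            # x/y displacements and reference frame index
+             ref = ref_frames[t]     # the reference may itself be a reconstructed P-/B-frame
+             block += pu[w_key] * ref[y0 + v:y0 + v + h, x0 + u:x0 + u + w]
+         pred[y0:y0 + h, x0:x0 + w] = block
+     return pred + residual          # Eq. (2): I_i = I_hat_i + e_i
+ ```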
88
+
89
+ # 3.3. Dense frame-wise motion representation
90
+
91
+ Performing frame-wise propagation directly from the motion vectors can be cumbersome, as the vectors are defined block-wise according to PUs. The PUs in a given frame often have several (different) references over multiple hops. As such, we compute a dense frame-wise motion field to serve as a more convenient intermediate representation. Specifically, we define a bi-directional motion field as $M_{i} = [\vec{M}_{i}, \overleftarrow{M}_{i}]$, where $\vec{M}_i\in \mathbb{R}^{H\times W\times 3}$ is a dense pixel-wise representation of the forward motions for frame $i$, storing $[\vec{u},\vec{v},\vec{t}]$, i.e. the displacements and the reference frame. As with the motion vectors, the right- and left-arrowed accents denote forward and backward motions, respectively; $\overleftarrow{M}_i\in \mathbb{R}^{H\times W\times 3}$ thus stores the backward motions $[\overleftarrow{u},\overleftarrow{v},\overleftarrow{t}]$ for frame $i$. The motion components are determined by aggregating all the PUs $\{\Omega_{ij}\}$, $j\in \{1,\dots,J_i\}$, where $J_{i}$ is the total number of PUs in frame $i$, i.e.
92
+
93
+ $$
94
+ \vec{\mathbf{v}}_{ij} \rightarrow \vec{M}_{i}^{x, y}, \quad \overleftarrow{\mathbf{v}}_{ij} \rightarrow \overleftarrow{M}_{i}^{x, y}, \quad (x, y) \in \Omega_{ij}. \tag{3}
95
+ $$
96
+
97
+ This assignment procedure, denoted by $\rightarrow$, iterates over all the spatial locations of frame $i$. If a given PU in a B-frame is uni-directional, the entries of the missing direction in $\vec{M}_i$ or $\overleftarrow{M}_i$ are set to zero accordingly. For pixels where $\vec{t}$ or $\overleftarrow{t}$ points to a keyframe, the prediction is single-hop; for pixels where $\vec{t}$ or $\overleftarrow{t}$ points to another non-keyframe, it is multi-hop, as the current reference is chained to further references.
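+
+ The aggregation in Eq. (3) can then be sketched as a simple scatter of the same illustrative PU dictionaries into two dense arrays; a reference-frame entry of 0 marks a direction that is undefined for that pixel (this convention is also used by the warping cases in Eq. (7) below).
+
+ ```python
+ import numpy as np
+
+ def build_motion_fields(pus, height, width):
+     """Scatter per-PU motion vectors into dense fields (Eq. 3).
+     Returns m_fwd, m_bwd of shape (H, W, 3), each storing [u, v, t] per pixel;
+     t == 0 means the corresponding direction is undefined at that pixel."""
+     m_fwd = np.zeros((height, width, 3), dtype=np.float32)
+     m_bwd = np.zeros((height, width, 3), dtype=np.float32)
+     for pu in pus:
+         y0, x0, h, w = pu["y0"], pu["x0"], pu["h"], pu["w"]
+         if pu["fwd"] is not None:
+             m_fwd[y0:y0 + h, x0:x0 + w] = pu["fwd"]   # (u, v, t) with t < i
+         if pu["bwd"] is not None:
+             m_bwd[y0:y0 + h, x0:x0 + w] = pu["bwd"]   # (u, v, t) with t > i
+     return m_fwd, m_bwd
+ ```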
98
+
99
+ # 4. Methodology
100
+
101
+ We accelerate off-the-shelf VOS methods by applying these methods as a base network to selected keyframes (Sec. 4.4). The keyframe segmentations are propagated to non-keyframes with a soft motion vector propagation module (Sec. 4.2) and further refined via a residual-based correction module (Sec. 4.3).
102
+
103
+ Fig. 4 illustrates the overall framework. The acceleration comes from the computational savings of propagation and correction compared to applying the base network to all frames in the sequence.
104
+
105
+ # 4.1. Problem formulation
106
+
107
+ We denote the decoded sequence from a compressed video bitstream of length $T$ as $\{(I_i, M_i, \mathbf{e}_i), i \in [1, T]\}$ .
108
+
109
+ For convenience, we directly use the motion field $M_{i}$ instead of the raw motion vectors. Note that after decoding, we already have access to the RGB image $I_{i}$ for frame $i$ . For $P$ and $B$ frames, $I_{i}$ is reconstructed from the motion-predicted frame $\hat{I}_{i}$ and the residual $\mathbf{e}_i$ based on Eq. (2). For clarity, we maintain two redundant frame indices $n$ and $k$ for referring to non-keyframes and keyframes, respectively. We denote the base network as $\{F,G\}$ . The first portion of the network $F$ extracts low-level appearance features $V_{k}$ from the input keyframe $I_{k}$ ; $G$ denotes the subsequent part of the network that further processes $V_{k}$ to estimate the segmentation $P_{k}$ , i.e., for a keyframe $k$ ,
110
+
111
+ $$
112
+ V _ {k} = F \left(I _ {k}\right), \quad P _ {k} = G \left(V _ {k}\right), \tag {4}
113
+ $$
114
+
115
+ where $P_{k}\in \mathbb{R}^{H\times W\times O}$ and $V_{k}\in \mathbb{R}^{H\times W\times C}$ . Here, $O$ is the number of objects in the video sequence, $C$ is the number of channels for the low-level feature and $H\times W$ is the spatial resolution of the prediction.
116
+
117
+ For a non-keyframe $I_{n}$ , a standard approach [45] to propagate the segmentation predictions from a keyframe $k$ is to apply a warp based on the optical flow:
118
+
119
+ $$
120
+ \tilde {P} _ {n} = W \left(\mathrm {O F} _ {n}, P _ {k}\right), \tag {5}
121
+ $$
122
+
123
+ where $W$ is the warping operation, $\mathrm{OF}_{n}$ is the optical flow between frames $I_n$ and $I_{k}$, and $\tilde{P}_n$ is the propagated prediction. This form of propagation has two key drawbacks. Firstly, most schemes compute optical flow only between two frames, which increases the errors that arise from occlusion. Secondly, estimating accurate optical flow still comes at considerable computational expense.
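+
+ For reference, a minimal PyTorch sketch of the standard flow-based warp $W$ in Eq. (5), implemented with `grid_sample`. The flow convention (each pixel of frame $n$ points to its location in frame $k$, i.e. a backward warp) and all names are illustrative rather than taken from any specific prior implementation.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def flow_warp(p_k, flow):
+     """Backward-warp the keyframe prediction P_k to frame n (sketch of Eq. 5).
+     p_k:  (B, O, H, W) keyframe prediction.
+     flow: (B, 2, H, W) flow in pixels; flow[:, 0] = dx, flow[:, 1] = dy,
+           mapping each pixel of frame n to its position in frame k."""
+     b, _, h, w = flow.shape
+     ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
+     grid_x = xs[None] + flow[:, 0]
+     grid_y = ys[None] + flow[:, 1]
+     # Normalise the sampling locations to [-1, 1], as expected by grid_sample.
+     grid = torch.stack([2 * grid_x / (w - 1) - 1,
+                         2 * grid_y / (h - 1) - 1], dim=-1)
+     return F.grid_sample(p_k, grid, align_corners=True)
+ ```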
124
+
125
+ # 4.2. Soft motion-vector propagation module
126
+
127
+ In this section, we outline how motion vectors, specifically the motion vector field $M_{n}$ defined in Eq. (3) for a non-keyframe $I_{n}$ , can be used in place of optical flow $\mathrm{OF}_n$ in Eq. (5). We first introduce the motion vector warping operation, in which $\hat{P}_n$ and $\hat{V}_n$ denote the motion vector warped prediction and warped features, i.e.
128
+
129
+ $$
130
+ \hat {P} _ {n} = W _ {M V} \left(M _ {n}, P _ {\star}\right), \quad \hat {V} _ {n} = W _ {M V} \left(M _ {n}, V _ {\star}\right), \tag {6}
131
+ $$
132
+
133
+ where $P_{\star}$ and $V_{\star}$ denote the segmentations and features, respectively, of the reference frames, which may be keyframes or already-processed non-keyframes. The warping operation $W_{MV}$ is defined as a backward warp that iterates over all the spatial locations of frame $n$.
134
+
135
+ ![](images/f628f969d8858f2769360587f9a4a0aab05db32bd4903d98c10a6593feb5a7bf.jpg)
136
+ Figure 4. Overall framework. Keyframe segmentation predictions are propagated to non-keyframes through a soft motion vector propagation module that suppresses inaccurate motion vectors. Propagated masks are then corrected based on the residuals and feature matching.
137
+
138
+ If we denote by $\Lambda$ the item to be propagated, i.e. $P_{\star}$ or $V_{\star}$, such that $\hat{\Lambda}_n = W_{MV}(M_n, \Lambda)$, then the propagated value at $(x, y)$ for a non-keyframe $n$, based on Eq. (1), can be defined as:
139
+
140
+ $$
141
+ \hat{\Lambda}_{n}^{x, y} = \left\{ \begin{array}{l l} \Lambda_{\vec{t}}^{x + \vec{u},\, y + \vec{v}}, & \text{if } \overleftarrow{t} = 0, \\ \Lambda_{\overleftarrow{t}}^{x + \overleftarrow{u},\, y + \overleftarrow{v}}, & \text{if } \vec{t} = 0, \\ \frac{1}{2} \Lambda_{\vec{t}}^{x + \vec{u},\, y + \vec{v}} + \frac{1}{2} \Lambda_{\overleftarrow{t}}^{x + \overleftarrow{u},\, y + \overleftarrow{v}}, & \text{otherwise,} \end{array} \right. \tag{7}
142
+ $$
143
+
144
+ $$
145
+ \text{where:} \quad [\vec{u}, \vec{v}, \vec{t}, \overleftarrow{u}, \overleftarrow{v}, \overleftarrow{t}] = M_{n}^{x, y}. \tag{8}
146
+ $$
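+
+ Before unpacking the cases of Eq. (7) in prose, here is a minimal NumPy sketch of the warp $W_{MV}$ of Eqs. (6)-(8) with nearest-neighbour rounding, reusing the dense fields sketched in Sec. 3.3. The zero-reference convention for undefined directions, the border clamping and the per-pixel Python loop (written for clarity, not speed) are illustrative choices.
+
+ ```python
+ import numpy as np
+
+ def warp_mv(m_fwd, m_bwd, ref_maps):
+     """Backward warp W_MV of Eq. (6), following the case analysis of Eq. (7).
+     m_fwd, m_bwd: dense (H, W, 3) fields [u, v, t] from Eq. (3); t == 0
+                   marks an undefined direction.
+     ref_maps: dict frame index -> map (H, W, C) to propagate, e.g. keyframe
+               predictions/features or already-warped non-keyframe results
+               when the warp is multi-hop (hence the decoding order matters)."""
+     h, w = m_fwd.shape[:2]
+     c = next(iter(ref_maps.values())).shape[-1]
+     out = np.zeros((h, w, c), dtype=np.float32)
+     for y in range(h):
+         for x in range(w):
+             acc, n_dir = np.zeros(c, dtype=np.float32), 0
+             for field in (m_fwd, m_bwd):
+                 u, v, t = field[y, x]
+                 if t == 0:                               # this direction is undefined
+                     continue
+                 yy = min(max(int(round(y + v)), 0), h - 1)
+                 xx = min(max(int(round(x + u)), 0), w - 1)
+                 acc += ref_maps[int(t)][yy, xx]
+                 n_dir += 1
+             if n_dir:                                    # equal weighting if bi-directional
+                 out[y, x] = acc / n_dir
+     return out
+ ```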
147
+
148
+ The first two cases in Eq. (7) warp uni-directional motion vectors forwards and backwards in time, respectively, and the third case handles bi-directional motion vectors. Note that in the third case, the forward and backward motion vectors are weighted equally rather than by $\vec{w}$ and $\overleftarrow{w}$ from Eq. (1). This is because we interpret the two references as equally indicative of the target mask; moreover, $\vec{w}$ and $\overleftarrow{w}$ are tuned for reconstructing the target RGB pixel value. When $u$ and $v$ are not integers, nearest-neighbour or bilinear interpolation is applied in the reference map; for simplicity, we omit the interpolation in the formulation. If the reference frame $\vec{t}$ or $\overleftarrow{t}$ is not a keyframe, the warping becomes multi-hop. Hence, the warping procedure must follow the decoding order, as referenced non-keyframes must be completed before their results can be propagated onwards. To mitigate the impact of noise and errors in the motion vector field, we propose a soft propagation scheme that makes use of a learned decoder $\mathcal{D}(\cdot)$:
149
+
150
+ $$
151
+ \tilde {P} _ {n} = \mathcal {D} \left(\left[ \hat {P} _ {n}, V _ {n}, S \left(V _ {n}, \hat {V} _ {n}\right) \cdot \hat {P} _ {n} \right]\right), \tag {9}
152
+ $$
153
+
154
+ where the square brackets $[\cdot,\cdot]$ denote concatenation. The decoder is lightweight and denoises the initially warped mask $\hat{P}_n = W_{MV}(M_n, P_{\star})$.
155
+
156
+ This denoising is guided by the low-level features of the input frame $I_{n}$, i.e. $V_{n} = F(I_{n})$, and by a confidence-weighted version of the propagated mask. The weighting term $S(V_{n},\hat{V}_{n})\in \mathbb{R}^{H\times W}$ is defined by a similarity between the extracted features $V_{n}\in \mathbb{R}^{H\times W\times C}$ and the propagated features $\hat{V}_n\in \mathbb{R}^{H\times W\times C}$. We use the dot product along the channel dimension to represent the similarity, i.e.
157
+
158
+ $$
159
+ S \left(V _ {n}, \hat {V} _ {n}\right) ^ {i j} = \sigma \left(V _ {n} ^ {i j} \cdot \hat {V} _ {n} ^ {i j}\right), \tag {10}
160
+ $$
161
+
162
+ where $\sigma$ is the standard sigmoid function. The similarity between the propagated features $\hat{V}_n$ and the directly estimated features $V_{n}$ serves as a confidence indicator, telling the decoder where the propagation is likely accurate. In areas of low similarity, the motion vector is likely inaccurate, so the propagated values are suppressed and require more denoising.
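+
+ A compact PyTorch sketch of Eqs. (9)-(10) is given below. The actual decoder consists of three residual blocks (Sec. 5.1); the toy convolutional stack, the channel sizes and the tensor shapes here are placeholders only.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class SoftPropagation(nn.Module):
+     """Soft propagation of Eq. (9) with the similarity weighting of Eq. (10)."""
+
+     def __init__(self, feat_ch=64, num_obj=2, hidden=32):
+         super().__init__()
+         in_ch = num_obj + feat_ch + num_obj        # [P_hat, V_n, S * P_hat]
+         self.decoder = nn.Sequential(              # stand-in for the real decoder D
+             nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(inplace=True),
+             nn.Conv2d(hidden, num_obj, 3, padding=1),
+         )
+
+     def forward(self, p_hat, v_n, v_hat):
+         # Eq. (10): channel-wise dot product between estimated and warped features.
+         s = torch.sigmoid((v_n * v_hat).sum(dim=1, keepdim=True))   # (B, 1, H, W)
+         # Eq. (9): decode [warped mask, current features, confidence-weighted mask].
+         return self.decoder(torch.cat([p_hat, v_n, s * p_hat], dim=1))
+
+ # Toy usage with random tensors (batch 1, 2 objects, 64 feature channels, 96x96).
+ p_hat = torch.rand(1, 2, 96, 96)     # motion-vector warped prediction
+ v_n = torch.rand(1, 64, 96, 96)      # low-level features of the current frame
+ v_hat = torch.rand(1, 64, 96, 96)    # warped reference features
+ p_tilde = SoftPropagation()(p_hat, v_n, v_hat)
+ ```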
163
+
164
+ # 4.3. Residual-based correction module
165
+
166
+ We introduce an additional correction module to further improve the quality of the propagated segmentation masks. As errors of the motion vectors are inherently captured in each frame's residual, it is natural to use the residuals as a cue for compensation. We choose to model the correction explicitly through patch generation and label matching; while implicitly feeding the residual into the decoder network could achieve similar performance, it would require relatively more data and a heavier decoder network.
167
+
168
+ Let $\mathbf{e} \in \mathbb{R}^{H \times W \times 3}$ and $\hat{\mathbf{S}}$ denote the residual and the propagated foreground mask, where $\hat{\mathbf{S}}$ is obtained by taking the $\operatorname{argmax}$ of the propagated prediction $\hat{P}$. We first convert $\mathbf{e}$ into a greyscale image and then threshold it into a binary mask $\mathbf{e}_b$.
169
+
170
+ ![](images/1cb0c707d1f2cf2c4ecda4adfdcc1d5b8f9152c85fa8468b7f1d6e5e40d11282.jpg)
171
+ Figure 5. Residual-based correction module selects pixels to correct in the propagated mask; the correction scheme replaces the segmentation labels via a feature matching scheme.
172
+
173
+ The correction mask $\tilde{\mathbf{S}}$, marking the pixels to be corrected, is found by taking the intersection of $\mathbf{e}_b$ and $\hat{\mathbf{S}}_+$, a dilated version of the initially propagated mask $\hat{\mathbf{S}}$, i.e., $\tilde{\mathbf{S}} = \cap(\mathbf{e}_b, \hat{\mathbf{S}}_+)$, where $\cap(\cdot)$ denotes the intersection operation. This restricts the correction to foreground areas of the dilated mask that coincide with thresholded residual values.
174
+
175
+ $\tilde{\mathbf{S}}$ provides an indication of which areas in the propagated mask will require correction. For each pixel in $\tilde{\mathbf{S}}$ indexed by $a$ at frame $n$ , we search in the temporally closest keyframe $k^*$ and match between $V_{n}$ and $V_{k^{*}}$ . Specifically, we define $\mathbf{W}^{ak}$ as the affinity between the feature at pixel $a$ in $V_{n}$ , i.e. $V_{n}^{a}$ , and all pixels in $V_{k^{*}}$ . The corrected mask prediction at pixel $a$ is then obtained by $P_{n}^{a} = \mathbf{W}^{ak}P_{k^{*}}$ . We use an L2-similarity function to compute the affinity matrix and defer the details to the Supplementary.
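+
+ The following PyTorch sketch strings these steps together: greyscale residual, thresholding (at the $0.15 \times 255$ value used in Sec. 5.1), dilation, intersection, and feature matching against the closest keyframe. The softmax over negative L2 distances is only one plausible instantiation of the affinity $\mathbf{W}^{ak}$ (the exact formulation is deferred to the Supplementary), and the temperature `tau`, the dilation kernel size and the tensor layouts are assumptions.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def residual_correction(residual, p_hat, v_n, v_key, p_key,
+                         thresh=0.15 * 255, dilate=7, tau=1.0):
+     """Sketch of the residual-based correction module (Sec. 4.3).
+     residual: (3, H, W) residual e of the current frame, in a 0..255 range.
+     p_hat:    (O, H, W) propagated probabilities (channel 0 assumed background).
+     v_n, v_key: (C, H, W) features of the current frame / closest keyframe k*.
+     p_key:    (O, H, W) keyframe prediction P_{k*}."""
+     o, h, w = p_hat.shape
+     e_b = residual.abs().mean(dim=0) > thresh                    # binary residual mask
+     fg = (p_hat.argmax(dim=0) > 0).float()[None, None]           # propagated foreground
+     fg_dil = F.max_pool2d(fg, dilate, stride=1, padding=dilate // 2)[0, 0] > 0
+     correct = e_b & fg_dil                                       # pixels to correct
+     idx = correct.nonzero(as_tuple=False)                        # (N, 2) coordinates
+     if idx.numel() == 0:
+         return p_hat
+     c = v_n.shape[0]
+     feats = v_n[:, idx[:, 0], idx[:, 1]].T                       # (N, C) query features
+     keys = v_key.reshape(c, -1).T                                # (H*W, C) keyframe features
+     aff = torch.softmax(-torch.cdist(feats, keys) / tau, dim=1)  # L2-based affinity
+     new_p = aff @ p_key.reshape(o, -1).T                         # (N, O) matched labels
+     out = p_hat.clone()
+     out[:, idx[:, 0], idx[:, 1]] = new_p.T
+     return out
+ ```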
176
+
177
+ # 4.4. Keyframe & base network selection
178
+
179
+ In principle, any frame can be a keyframe. However, it is natural to define keyframes according to the compressed frame type, as the encoder designates types based on the video's dynamic content. In addition to I-frames, we also choose P-frames as keyframes. This is because less than $5\%$ of frames in a video sequence are I-frames in the default HEVC encoding, which is insufficient for accurate propagation, so we also include the $15 - 35\%$ of frames designated as P-frames. Considering P-frames as keyframes also helps improve the accuracy because the motion compensation in P-frames is strictly uni-directional. Otherwise, propagation to these frames may suffer inaccuracies arising from occlusions in the same manner as optical flow.
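+
+ In practice, the frame types needed for this split can be read directly from the compressed stream. Below is a small sketch using PyAV, a Python binding to FFmpeg; attribute names such as `pict_type` may differ slightly across versions, and this is only a convenience illustration, not the modified openHEVC decoder used in our experiments (Sec. 5.1).
+
+ ```python
+ import av  # PyAV bindings to FFmpeg
+
+ def split_keyframes(video_path):
+     """Classify the frames of an encoded video into keyframes (I/P) and
+     non-keyframes (B), following the keyframe definition of Sec. 4.4."""
+     key_idx, nonkey_idx = [], []
+     with av.open(video_path) as container:
+         for i, frame in enumerate(container.decode(video=0)):
+             if frame.pict_type.name in ("I", "P"):   # run the base VOS model here
+                 key_idx.append(i)
+             else:                                    # propagate + correct instead
+                 nonkey_idx.append(i)
+     return key_idx, nonkey_idx
+ ```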
180
+
181
+ As the base VOS model to be accelerated, most matching-based segmentation models discussed in Sec. 2 are suitable, as they rely only on the appearance of the target object. From preliminary experiments, we observed that VOS methods that use memory networks, such as STM [20], MiVOS [4], and STCN [5], are ideal for acceleration. This is because the choice of I- and P-frames as keyframes naturally aligns with the memory concept and allows for the selection of an (even more) compact yet diverse memory.
182
+
183
+ # 5. Experimentation
184
+
185
+ # 5.1. Experimental settings
186
+
187
+ Video Compression. We generated compressed videos from the image sequences using the x265 library in FFmpeg with the default preset. To write out the motion vectors and residuals from the bitstream, we modified the decoder from openHEVC [8, 9] and share the code publicly to encourage others to work with compressed video.
188
+
189
+ Datasets & Evaluation. We experimented with three video object segmentation benchmarks: DAVIS16 [24] and DAVIS17 [25], which are small datasets with 50 and 120 videos of single and multiple objects, respectively, and YouTube-VOS [42], a large-scale dataset with 3945 videos of multiple objects. We used the images in their original resolution for encoding the videos. The default HEVC encoding produced an average of $\{37\%, 36\%, 27\%$ of I/P-frames, and therefore keyframes per sequence for DAVIS16, DAVIS17 and YouTube-VOS, respectively.
190
+
191
+ We evaluated with the standard criteria from [24]: the Jaccard index $\mathcal{J}$ (IoU of the output segmentation with the ground-truth mask) for region similarity, and the mean boundary $\mathcal{F}$-score for contour accuracy. Additionally, we report the average over all seen and unseen classes for YouTube-VOS.
192
+
193
+ Propagation & Correction. In our propagation scheme, we applied reverse mapping for warping and nearest-neighbour interpolation kernels. The decoder in the soft propagation (Sec. 4.2) is a lightweight network of three residual blocks (see Supplementary for details). The decoder is trained from scratch, with a uniform initialization and a learning rate of 1e-4 with a decay factor of 0.1 every 10k iterations for 40k iterations. For residual-based correction, the binary threshold was set to $0.15*255$ for the absolute value of gray-scaled residual.
194
+
195
+ Base Models. We show experiments accelerating four base models: STM [20], MiVOS [4], STCN [5] and FRTM-VOS [27]. The first three use a memory bank; for a fair comparison, we allow only keyframes to be stored in the memory bank. We set the memory frequency to 2 on DAVIS and 5 on YouTube-VOS, as the latter has higher frame rates. In the experiments, both settings reduced the memory bank size; we refer to the Supplementary for the memory analysis. FRTM-VOS fine-tunes a network based on the labelled frame and associated augmentations. We feed only the keyframes into the network for segmentation and fine-tuning; in practice, this is equivalent to segmenting a temporally reduced video.
196
+
197
+ # 5.2. Acceleration on different base models.
198
+
199
+ Tab. 1 compares our accelerated results on the four base models with other state-of-the-art models. Our method achieves an excellent compromise between accuracy and speed. On DAVIS16 ($\approx 37\%$ keyframes), we achieved $1.3\times$, $2.1\times$, $2.2\times$ and $1.6\times$ speed-ups on FRTM-VOS, STM, MiVOS and STCN, respectively, with minor $\mathcal{J}\&\mathcal{F}$ drops of 1.2, 2.1, 1.6 and 2.6.
200
+
201
+ Table 1. Comparison of acceleration on different base models with state-of-the-art methods. †Frame rates were measured on our device when not originally provided; we also re-measured the STM time on our hardware as we obtained a higher FPS than the reported value. FPS on YouTube-VOS is measured on the first 30 videos.
202
+
203
+ <table><tr><td rowspan="2">Method</td><td colspan="4">DAVIS16 validation</td><td colspan="4">DAVIS17 validation</td><td colspan="6">YouTube-VOS 2018 validation</td></tr><tr><td>J</td><td>F</td><td>J&amp;F</td><td>FPS</td><td>J</td><td>F</td><td>J&amp;F</td><td>FPS</td><td>G</td><td>Js</td><td>Fs</td><td>Ju</td><td>Fu</td><td>FPS</td></tr><tr><td>CVOS [32]</td><td>79.1</td><td>80.3</td><td>79.7</td><td>34.5</td><td>57.4</td><td>59.3</td><td>58.4</td><td>31.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>TVOS [44]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>69.9</td><td>74.7</td><td>72.3</td><td>37</td><td>67.8</td><td>67.1</td><td>69.4</td><td>63.0</td><td>71.6</td><td>-</td></tr><tr><td>Track-Seg [3]</td><td>82.6</td><td>83.6</td><td>83.1</td><td>39</td><td>68.6</td><td>76.0</td><td>72.3</td><td>&lt;39</td><td>63.6</td><td>67.1</td><td>70.2</td><td>55.3</td><td>61.7</td><td>-</td></tr><tr><td>PReMVOS [17]</td><td>84.9</td><td>88.6</td><td>86.8</td><td>0.03</td><td>73.9</td><td>81.7</td><td>77.8</td><td>&lt;0.03</td><td>66.9</td><td>71.4</td><td>75.9</td><td>56.5</td><td>63.7</td><td>-</td></tr><tr><td>SwiftNet [36]</td><td>90.5</td><td>90.3</td><td>90.4</td><td>25</td><td>78.3</td><td>83.9</td><td>81.1</td><td>25</td><td>77.8</td><td>77.8</td><td>81.8</td><td>72.3</td><td>79.5</td><td>-</td></tr><tr><td>CFBI+ [43]</td><td>88.7</td><td>91.1</td><td>89.9</td><td>5.6</td><td>80.1</td><td>85.7</td><td>82.9</td><td>&lt;5.6</td><td>82.0</td><td>81.2</td><td>86.0</td><td>76.2</td><td>84.6</td><td>-</td></tr><tr><td>FRTM-VOS [27]</td><td>-</td><td>-</td><td>83.5</td><td>21.9</td><td>-</td><td>-</td><td>76.7</td><td>†14.1</td><td>72.1</td><td>72.3</td><td>76.2</td><td>65.9</td><td>74.1</td><td>†7.7</td></tr><tr><td>FRTM-VOS + CoVOS</td><td>82.3</td><td>82.2</td><td>82.3</td><td>28.6</td><td>69.7</td><td>75.2</td><td>72.5</td><td>20.6</td><td>65.6</td><td>68.0</td><td>71.0</td><td>58.2</td><td>65.4</td><td>25.3</td></tr><tr><td>STM [20]</td><td>88.7</td><td>89.9</td><td>89.3</td><td>†14.9</td><td>79.2</td><td>84.3</td><td>81.8</td><td>†10.6</td><td>79.4</td><td>79.7</td><td>84.2</td><td>72.8</td><td>80.9</td><td>-</td></tr><tr><td>STM + CoVOS</td><td>87.0</td><td>87.3</td><td>87.2</td><td>31.5</td><td>78.3</td><td>82.7</td><td>80.5</td><td>23.8</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MiVOS [4]</td><td>89.7</td><td>92.4</td><td>91.0</td><td>16.9</td><td>81.7</td><td>87.4</td><td>84.5</td><td>11.2</td><td>82.6</td><td>81.1</td><td>85.6</td><td>77.7</td><td>86.2</td><td>†13</td></tr><tr><td>MiVOS + CoVOS</td><td>89.0</td><td>89.8</td><td>89.4</td><td>36.8</td><td>79.7</td><td>84.6</td><td>82.2</td><td>25.5</td><td>79.3</td><td>78.9</td><td>83.0</td><td>73.5</td><td>81.7</td><td>45.9</td></tr><tr><td>STCN [5]</td><td>90.4</td><td>93.0</td><td>91.7</td><td>26.9</td><td>82.0</td><td>88.6</td><td>85.3</td><td>20.2</td><td>84.3</td><td>83.2</td><td>87.9</td><td>79.0</td><td>87.3</td><td>†16.8</td></tr><tr><td>STCN + CoVOS</td><td>88.5</td><td>89.6</td><td>89.1</td><td>42.7</td><td>79.7</td><td>85.1</td><td>82.4</td><td>33.7</td><td>79.0</td><td>79.4</td><td>83.6</td><td>72.6</td><td>80.4</td><td>57.9</td></tr></table>
204
+
205
+ On DAVIS17 ($\approx 36\%$ keyframes), we achieved $1.5\times$, $2.2\times$, $2.3\times$ and $1.7\times$ speed-ups, with $\mathcal{J}\&\mathcal{F}$ drops of 4.2, 1.3, 1.7 and 2.9 for the same order of models.
206
+
207
+ On YouTube-VOS ($\approx 27\%$ keyframes), we achieved $3.3\times$, $3.5\times$ and $3.4\times$ speed-ups with drops of 4.8, 2.4 and 4.0 in $\mathcal{J}_s\&\mathcal{F}_s$ for FRTM-VOS, MiVOS and STCN, respectively. We see larger drops in $\mathcal{J}_u\&\mathcal{F}_u$ on unseen data because our decoder is not pre-trained on larger datasets. Note that the videos in YouTube-VOS are relatively long ($>150$ frames), so the above methods require additional memory or additional online fine-tuning, which allows us to achieve higher speed-ups. Moreover, the lower keyframe percentage of YouTube-VOS also yields additional speed-up. We do not provide a result for STM as no pretrained weights are available.
208
+
209
+ With an STCN base model, our $\mathcal{J}\&\mathcal{F}$ on DAVIS17 is 1.3 to 10.1 points higher than the other efficient methods SwiftNet [36], TVOS [44] and Track-Seg [3] at comparable frame rates, though our success should also be attributed to the high accuracy of the STCN base. Another compressed-video method, CVOS [32], achieves comparable speed but has a significant accuracy gap.
210
+
211
+ # 5.3. Ablation studies
212
+
213
+ We verified each component of our framework. All ablations used MiVOS [4] as the base model on default video encode preset unless otherwise indicated.
214
+
215
+ Propagation. We first compare with optical flow as a form of propagation, and consider a forward unidirectional flow warping as done in [7, 23, 45], using the flow from the state-of-the-art method RAFT [33] ('Optical Flow'). We also consider a bi-directional optical flow warping ('Bi-Optical Flow'), which is used in [21]. Additionally, we compare with two motion vector baselines from a work on
216
+
217
+ Table 2. Comparison of propagation methods. 'B', 'M', 'Sup' denote bi-directional, multi-hop and noise suppression, respectively. †No code available; we report the result from earlier work [32].
218
+
219
+ <table><tr><td rowspan="2">Method</td><td rowspan="2">B</td><td rowspan="2">M</td><td rowspan="2">Sup</td><td colspan="2">DAVIS16</td><td colspan="2">DAVIS17</td></tr><tr><td>J</td><td>F</td><td>J</td><td>F</td></tr><tr><td>Optical Flow</td><td></td><td></td><td></td><td>77.4</td><td>79.2</td><td>71.5</td><td>77.6</td></tr><tr><td>Bi-Optical Flow [21]</td><td>X</td><td></td><td></td><td>85.0</td><td>87.4</td><td>75.9</td><td>81.7</td></tr><tr><td>MV I to P [32]</td><td></td><td></td><td></td><td>31.5†</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MV to Flow [40]</td><td></td><td></td><td></td><td>77.2</td><td>80.2</td><td>69.4</td><td>76.3</td></tr><tr><td>MV Warp</td><td>X</td><td>X</td><td></td><td>85.7</td><td>89.2</td><td>77.2</td><td>84.4</td></tr><tr><td>MV Soft Prop</td><td>X</td><td>X</td><td>X</td><td>89.0</td><td>89.8</td><td>79.7</td><td>84.6</td></tr><tr><td>No propagation [4]</td><td></td><td></td><td></td><td>89.7</td><td>92.4</td><td>81.7</td><td>87.4</td></tr></table>
220
+
221
+ The first comes from a work on compressed videos, CoViAR [40] ('MV to Flow'), and the second from another work on compressed VOS, CVOS [32] ('MV I to P'). CoViAR converts motion vectors into a flow between two frames, i.e. for the motion vector field $M_{i}$ at $(x,y)$ in frame $i$, $M_{of}(x,y) = [u,v] / [(t - i)\cdot fps]$ is the motion of the pixel $(x,y)$ per unit time in the image plane of frame $i$. CVOS uses an even more simplified motion-vector scheme and references all motions from one I-frame in the GOP. We compare our bi-directional, multi-hop motion vector warping with ('MV Soft Prop') and without ('MV Warp') the soft propagation, which performs additional noise suppression.
222
+
223
+ Tab. 2 verifies the effectiveness of our proposed propagation. Bi-directional optical flow, originally used for video generation [21], performs better than unidirectional optical flow because it is less affected by occlusion. CoViAR [40] is a compressed video action recognition system; their propagation is on par with optical flow. The simplified case in CVOS [32] fails to propagate meaningful segmentation masks and thus relies on heavy refinement.
224
+
225
+ Our bi-directional, multi-hop motion vector warping outperforms all of the above methods. Our soft propagation scheme with noise suppression further improves the accuracy, such that our propagated masks are within 4.0 points on both $\mathcal{J}$ and $\mathcal{F}$ of the upper bound obtained without propagation, i.e. by running every frame through the base network.
226
+
227
+ Fig. 6 shows qualitative results for the different propagation methods.
228
+
229
+ Table 3. Ablations on decoder and mask correction module.
230
+
231
+ <table><tr><td rowspan="2">Module</td><td colspan="2">DAVIS16</td><td colspan="2">DAVIS17</td></tr><tr><td>J</td><td>F</td><td>J</td><td>F</td></tr><tr><td>MV Warp</td><td>85.7</td><td>89.2</td><td>77.2</td><td>84.4</td></tr><tr><td>+Decoder</td><td>88.3</td><td>88.8</td><td>79.2</td><td>84.0</td></tr><tr><td>+Suppression</td><td>88.8</td><td>89.6</td><td>79.6</td><td>84.5</td></tr><tr><td>+Residual Correction</td><td>89.0</td><td>89.8</td><td>79.7</td><td>84.6</td></tr></table>
232
+
233
+ Decoder and mask correction. Tab. 3 shows how adding each component of the mask decoder leads to progressive improvements in the $\mathcal{J}$-index and boundary $\mathcal{F}$-score. For 'MV Warp', we directly warp the prediction results at the original frame size. For the decoder, we warp the prediction and low-level features at $1/4$ size for speed. Because the motion vectors are coarse and noisy, feeding only the propagated prediction and the low-level features to the decoder can decrease accuracy. The most significant gains come from the noise suppression module, i.e. feeding the suppressed propagated prediction into the decoder. Further residual correction increases robustness in corner cases.
234
+
235
+ Keyframe percentage. To highlight the speed-accuracy trade-off, Tab. 4 compares different keyframe percentages obtained by adjusting the encoder presets. The default HEVC setting yields $\approx 37\%$ keyframes for DAVIS16 and DAVIS17. If we set the encoder to allocate more B-frames, so that only approximately $25\%$ and $13\%$ of frames are keyframes ('B-frame biased' and 'Uniform B-frames', respectively), the propagated scores decrease while the FPS increases accordingly. At the fastest setting, we achieve a $3.7\times$ speed-up on MiVOS with a $\mathcal{J}\&\mathcal{F}$ score of 82.9 on DAVIS16 and a $4.5\times$ speed-up with a $\mathcal{J}\&\mathcal{F}$ score of 73.2 on DAVIS17.
236
+
237
+ Table 4. Robustness to different video encoding presets on DAVIS16 and DAVIS17. B-frame biased: more weight on B-frame allocation (x265 option: bframe-bias=50). Uniform B-frames: fixed 8 B-frames between I/P frames.
238
+
239
+ <table><tr><td rowspan="2">Preset</td><td rowspan="2">Keyframe</td><td colspan="2">DAVIS16</td><td colspan="2">DAVIS17</td></tr><tr><td>J&amp;F</td><td>FPS</td><td>J&amp;F</td><td>FPS</td></tr><tr><td>Default</td><td>≈ 37%</td><td>89.4</td><td>36.8</td><td>82.2</td><td>25.5</td></tr><tr><td>B-frame biased</td><td>≈ 25%</td><td>85.1</td><td>48.2</td><td>80.2</td><td>36.7</td></tr><tr><td>Uniform B-frames</td><td>≈ 13%</td><td>82.9</td><td>62.9</td><td>73.2</td><td>50.0</td></tr><tr><td>No Propagation</td><td>-</td><td>91.0</td><td>16.9</td><td>84.5</td><td>11.2</td></tr></table>
240
+
241
+ # 5.4. Timing analysis
242
+
243
+ To compute the FPS values in all our tables, we measured run times on an RTX 2080 Ti for the DAVIS datasets and on an RTX A5000 for YouTube-VOS, as the latter requires extra memory. The amortized per-frame inference time can be approximately computed as $T_{\mathrm{base}} \cdot R + (T_{\mathrm{propagation}} + T_{\mathrm{correction}}) \cdot (1 - R)$, where $R$ denotes the ratio of keyframes.
244
+
245
+ ![](images/c7f95ea3d30eded23f3886e3ccb542334a4b10292f5e9a7f0030002176198371.jpg)
246
+ Figure 6. Optical flow propagation and motion-vector generated flows both suffer from ghosting effects and holes in areas of occlusion. Our propagation successfully prevents such artifacts.
247
+
248
+ Note that the measured $T_{\mathrm{base}}$ may not correspond to the published FPS values of the base model, e.g. for STM [20] and MiVOS [4]; our $T_{\mathrm{base}}$ is lower because we store fewer frames in the memory bank (see Supplementary for more details). We measured the propagation and correction time on DAVIS17, and the sum $(T_{\mathrm{propagation}} + T_{\mathrm{correction}})$ is 12 ms.
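+
+ As a worked example with these numbers (MiVOS on DAVIS17: $\approx 36\%$ keyframes, the 11.2 FPS of the base model used as a rough stand-in for $T_{\mathrm{base}}$, and 12 ms for propagation plus correction), the amortized cost is roughly:
+
+ ```python
+ # Amortized per-frame time for MiVOS + CoVOS on DAVIS17, per the formula above.
+ R = 0.36                   # fraction of keyframes (I/P-frames)
+ t_base = 1.0 / 11.2        # seconds per keyframe through the base network
+ t_prop_corr = 0.012        # propagation + correction per non-keyframe (12 ms)
+ t_frame = t_base * R + t_prop_corr * (1 - R)
+ print(f"{t_frame * 1e3:.1f} ms/frame  ->  {1.0 / t_frame:.1f} FPS")
+ # ~39.8 ms/frame, i.e. ~25 FPS, in line with the 25.5 FPS reported in Tab. 1.
+ ```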
249
+
250
+ # 6. Conclusion & limitations
251
+
252
+ We propose an acceleration framework for semi-supervised VOS that propagates segmentations by exploiting the motion vectors and residuals of the compressed video bitstream. The framework speeds up accurate but slow base VOS models with only minor drops in segmentation accuracy. One limitation of our work is the possible latency introduced by the multiple reference dependencies: the segmentation of a non-keyframe is only completed after the future frames it references have been processed.
253
+
254
+ Given that $70\%$ of internet traffic [12] is dedicated to (compressed) video, we see broad applicability of our work for acceleration. Efficiency in VOS methods is especially relevant for applications such as video editing, given the growing trend towards higher-resolution videos, e.g. the 4K standards. However, VOS could also be abused to falsify parts of videos or create malicious content. We maintain a rigorous attitude towards this risk, while emphasizing the positive impact of VOS on content creation and other possible benefits for the community.
255
+
256
+ # 7. Acknowledgements
257
+
258
+ This research is supported by the National Research Foundation, Singapore under its NRF Fellowship for AI (NRF-NRFFAI1-2019-0001). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
259
+
260
+ # References
261
+
262
+ [1] R Venkatesh Babu, KR Ramakrishnan, and SH Srinivasan. Video object segmentation: a compressed domain approach. TCSVT, 2004. 2
263
+ [2] Sergi Caelles, Kevis-Kokitsi Maninis, Jordi Pont-Tuset, Laura Leal-Taixe, Daniel Cremers, and Luc Van Gool. One-shot video object segmentation. In CVPR, 2017. 1, 2
264
+ [3] Xi Chen, Zuoxin Li, Ye Yuan, Gang Yu, Jianxin Shen, and Donglian Qi. State-aware tracker for real-time video object segmentation. In CVPR, 2020. 1, 7
265
+ [4] Ho Kei Cheng, Yu-Wing Tai, and Chi-Keung Tang. Modular interactive video object segmentation: Interaction-to-mask, propagation and difference-aware fusion. In CVPR, 2021. 1, 2, 6, 7, 8
266
+ [5] Ho Kei Cheng, Yu-Wing Tai, and Chi-Keung Tang. Rethinking space-time networks with improved memory coverage for efficient video object segmentation. In NeurIPS, 2021. 1, 2, 6, 7
267
+ [6] Jingchun Cheng, Yi-Hsuan Tsai, Shengjin Wang, and Ming-Hsuan Yang. Segflow: Joint learning for video object segmentation and optical flow. In ICCV, 2017. 2
268
+ [7] Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick Van Der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. In ICCV, 2015. 7
269
+ [8] Wassim Hamidouche, Michael Raulet, and Olivier Déforges. Parallel shvc decoder: Implementation and analysis. In ICME, 2014. 6
270
+ [9] Wassim Hamidouche, Michael Raulet, and Olivier Déforges. Real time shvc decoder: Implementation and complexity analysis. In ICIP, 2014. 6
271
+ [10] Ping Hu, Gang Wang, Xiangfei Kong, Jason Kuen, and Yap-Peng Tan. Motion-guided cascaded refinement network for video object segmentation. In CVPR, 2018. 2
272
+ [11] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In CVPR, 2017. 2
273
+ [12] Cisco Visual Networking Index. Forecast and methodology, 2016-2021. White paper, Cisco public, 2017. 8
274
+ [13] Samvit Jain, Xin Wang, and Joseph Gonzalez. Accel: A corrective fusion network for efficient semantic segmentation on video. In CVPR, 2019. 2
275
+ [14] Didier Le Gall. Mpeg: A video compression standard for multimedia applications. Communications of the ACM, 1991. 1, 2
276
+ [15] Yong Jae Lee, Jaechul Kim, and Kristen Grauman. Key-segments for video object segmentation. In ICCV, 2011. 2
277
+ [16] Yule Li, Jianping Shi, and Dahua Lin. Low-latency video semantic segmentation. In CVPR, 2018. 2
278
+ [17] Jonathon Luiten, Paul Voigtlaender, and Bastian Leibe. Premvos: Proposal-generation, refinement and merging for video object segmentation. In ACCV, 2018. 1, 7
279
+ [18] Kevis-Kokitsi Maninis, Sergi Caelles, Yuhua Chen, Jordi Pont-Tuset, Laura Leal-Taixe, Daniel Cremers, and Luc Van Gool. Video object segmentation without temporal information. TPAMI, 2017. 1
282
+ [19] Seoung Wug Oh, Joon-Young Lee, Kalyan Sunkavalli, and Seon Joo Kim. Fast video object segmentation by reference-guided mask propagation. In CVPR, 2018. 1, 2
283
+ [20] Seoung Wug Oh, Joon-Young Lee, Ning Xu, and Seon Joo Kim. Video object segmentation using space-time memory networks. In ICCV, 2019. 1, 2, 6, 7, 8
284
+ [21] Junting Pan, Chengyu Wang, Xu Jia, Jing Shao, Lu Sheng, Junjie Yan, and Xiaogang Wang. Video generation from single semantic label map. In CVPR, 2019. 7
285
+ [22] Matthieu Paul, Christoph Mayer, Luc Van Gool, and Radu Timofte. Efficient video semantic segmentation with labels propagation and refinement. In WACV, 2020. 2
286
+ [23] Federico Perazzi, Anna Khoreva, Rodrigo Benenson, Bernt Schiele, and Alexander Sorkine-Hornung. Learning video object segmentation from static images. In CVPR, 2017. 1, 2, 7
287
+ [24] F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In CVPR, 2016. 6
288
+ [25] Jordi Pont-Tuset, Federico Perazzi, Sergi Caelles, Pablo Arbeláez, Alexander Sorkine-Hornung, and Luc Van Gool. The 2017 davis challenge on video object segmentation. arXiv:1704.00675, 2017. 6
289
+ [26] Fatih Porikli, Faisal Bashir, and Huifang Sun. Compressed domain video object segmentation. TCSVT, 2009. 2
290
+ [27] Andreas Robinson, Felix Jaremo Lawin, Martin Danelljan, Fahad Shahbaz Khan, and Michael Felsberg. Learning fast and robust target models for video object segmentation. In CVPR, 2020. 1, 2, 6, 7
291
+ [28] Hongje Seong, Junhyuk Hyun, and Euntai Kim. Kernelized memory network for video object segmentation. In ECCV, 2020. 2
292
+ [29] Thomas Sikora. The MPEG-4 video standard verification model. TCSVT, 1997. 3
293
+ [30] Gary J. Sullivan, Jens-Rainer Ohm, Woo-Jin Han, and Thomas Wiegand. Overview of the high efficiency video coding (HEVC) standard. TCSVT, 2012. 1, 3
294
+ [31] Vivienne Sze, Madhukar Budagavi, and Gary J Sullivan. High efficiency video coding (hevc). In Integrated circuit and systems, algorithms and architectures. 2014. 3
295
+ [32] Zhentao Tan, Bin Liu, Qi Chu, Hangshi Zhong, Yue Wu, Weihai Li, and Nenghai Yu. Real time video object segmentation in compressed domain. TCSVT, 2020. 1, 2, 3, 7
296
+ [33] Zachary Teed and Jia Deng. Raft: Recurrent all-pairs field transforms for optical flow. In ECCV, 2020. 2, 7
297
+ [34] David Tsai, Matthew Flagg, Atsushi Nakazawa, and James M Rehg. Motion coherent tracking using multi-label mrf optimization. IJCV, 2012. 2
298
+ [35] Paul Voigtlaender and Bastian Leibe. Online adaptation of convolutional neural networks for video object segmentation. arXiv:1706.09364, 2017. 2
299
+ [36] Haochen Wang, Xiaolong Jiang, Haibing Ren, Yao Hu, and Song Bai. Swiftnet: Real-time video object segmentation. In CVPR, 2021. 7
300
+
301
+ [37] Qiang Wang, Li Zhang, Luca Bertinetto, Weiming Hu, and Philip HS Torr. Fast online object tracking and segmentation: A unifying approach. In CVPR, 2019. 1
302
+ [38] Shiyao Wang, Hongchao Lu, and Zhidong Deng. Fast object detection in compressed video. In ICCV, 2019. 2
303
+ [39] Thomas Wiegand, Gary J Sullivan, Gisle Bjontegaard, and Ajay Luthra. Overview of the H.264/AVC video coding standard. TCSVT, 2003. 3
304
+ [40] Chao-Yuan Wu, Manzil Zaheer, Hexiang Hu, R Manmatha, Alexander J Smola, and Philipp Krahenbuhl. Compressed video action recognition. In CVPR, 2018. 2, 7
305
+ [41] Mai Xu, Lai Jiang, Xiaoyan Sun, Zhaoting Ye, and Zulin Wang. Learning to detect video saliency with hevc features. TIP, 2017. 2
306
+ [42] Ning Xu, Linjie Yang, Yuchen Fan, Dingcheng Yue, Yuchen Liang, Jianchao Yang, and Thomas Huang. YouTube-VOS: A large-scale video object segmentation benchmark. arXiv:1809.03327, 2018. 6
307
+ [43] Zongxin Yang, Yunchao Wei, and Yi Yang. Collaborative video object segmentation by multi-scale foreground-background integration. TPAMI, 2021. 1, 7
308
+ [44] Yizhuo Zhang, Zhirong Wu, Houwen Peng, and Stephen Lin. A transductive approach for video object segmentation. In CVPR, 2020. 1, 7
309
+ [45] Xizhou Zhu, Yuwen Xiong, Jifeng Dai, Lu Yuan, and Yichen Wei. Deep feature flow for video recognition. In CVPR, 2017. 2, 4, 7
acceleratingvideoobjectsegmentationwithcompressedvideo/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f393318e48e07b3bc575a09a5c5700a36d36bddfa32afe528de07bdf4a3be3b8
3
+ size 474348
acceleratingvideoobjectsegmentationwithcompressedvideo/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:50df317f873cc0526c9bf7ec4c8de0d45ed28b1ecd3287ed67cbea179be854ef
3
+ size 470327
accurate3dbodyshaperegressionusingmetricandsemanticattributes/55090ba8-e47f-43e1-99bb-5d8bd426be4e_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3c9b08d648d19162ff39455b99b8099f4857540373d8581287484dc0c11b3fe4
3
+ size 89389
accurate3dbodyshaperegressionusingmetricandsemanticattributes/55090ba8-e47f-43e1-99bb-5d8bd426be4e_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5205e6eea49af0d5b89f758e06188cd0b051490bb2afe48937530bc33aacff0e
3
+ size 110036
accurate3dbodyshaperegressionusingmetricandsemanticattributes/55090ba8-e47f-43e1-99bb-5d8bd426be4e_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e133a2dd31ea1940e1b48998c0f836a8f8f5f3ad736b2fa9648a6023ef6caab4
3
+ size 6902850
accurate3dbodyshaperegressionusingmetricandsemanticattributes/full.md ADDED
@@ -0,0 +1,379 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Accurate 3D Body Shape Regression using Metric and Semantic Attributes
2
+
3
+ Vasileios Choutas\*, Lea Muller\*, Chun-Hao P. Huang, Siyu Tang, Dimitrios Tzionas, Michael J. Black1 Max Planck Institute for Intelligent Systems, Tübingen, Germany ETH Zürich
4
+
5
+ {vchoutas, lea.mueller, paul.huang, stang, dtzionas, black}@tuebingen.mpg.de
6
+
7
+ * Equal contribution, alphabetical order
8
+
9
+ ![](images/8eb9eddcbe24844b215d5ab2c23fc96f0a9a75b630ef8640bcf184a129d7a177.jpg)
10
+ Figure 1. Existing work on 3D human reconstruction from a color image focuses mainly on pose. We present SHAPY, a model that focuses on body shape and learns to predict dense 3D shape from a color image, using crowd-sourced linguistic shape attributes. Even with this weak supervision, SHAPY outperforms the state of the art (SOTA) [52] on in-the-wild images with varied clothing.
11
+
12
+ ![](images/0f97e62b13ec2a238c6538f123bb0dfa14d89e50f9f150ec967f44c2fe1381ff.jpg)
13
+
14
+ # Abstract
15
+
16
+ While methods that regress 3D human meshes from images have progressed rapidly, the estimated body shapes often do not capture the true human shape. This is problematic since, for many applications, accurate body shape is as important as pose. The key reason that body shape accuracy lags pose accuracy is the lack of data. While humans can label 2D joints, and these constrain 3D pose, it is not so easy to "label" 3D body shape. Since paired data with images and 3D body shape are rare, we exploit two sources of information: (1) we collect internet images of diverse "fashion" models together with a small set of anthropometric measurements; (2) we collect linguistic shape attributes for a wide range of 3D body meshes and the model images. Taken together, these datasets provide sufficient constraints to infer dense 3D shape. We exploit the anthropometric measurements and linguistic shape attributes in several novel ways to train a neural network, called SHAPY, that regresses 3D human pose and shape from an RGB image.
17
+
18
+ We evaluate SHAPY on public benchmarks, but note that they either lack significant body shape variation, ground-truth shape, or clothing variation. Thus, we collect a new dataset for evaluating 3D human shape estimation, called HBW, containing photos of "Human Bodies in the Wild" for which we have ground-truth 3D body scans. On this new benchmark, SHAPY significantly outperforms state-of-the-art methods on the task of 3D body shape estimation. This is the first demonstration that 3D body shape regression from images can be trained from easy-to-obtain anthropometric measurements and linguistic shape attributes. Our model and data are available at: shapy.is.tue.mpg.de
19
+
20
+ # 1. Introduction
21
+
22
+ The field of 3D human pose and shape (HPS) estimation is progressing rapidly and methods now regress accurate 3D pose from a single image [7,26,28,30-33,43,65,67]. Un
23
+
24
+ fortunately, less attention has been paid to body shape and many methods produce body shapes that clearly do not represent the person in the image (Fig. 1, top right). There are several reasons behind this. Current evaluation datasets focus on pose and not shape. Training datasets of images with 3D ground-truth shape are lacking. Additionally, humans appear in images wearing clothing that obscures the body, making the problem challenging. Finally, the fundamental scale ambiguity in 2D images, makes 3D shape difficult to estimate. For many applications, however, realistic body shape is critical. These include AR/VR, apparel design, virtual try-on, and fitness. To democratize avatars, it is important to represent and estimate all possible 3D body shapes; we make a step in that direction.
25
+
26
+ Note that commercial solutions to this problem require users to wear tight fitting clothing and capture multiple images or a video sequence using constrained poses. In contrast, we tackle the unconstrained problem of 3D body shape estimation in the wild from a single RGB image of a person in an arbitrary pose and standard clothing.
27
+
28
+ Most current approaches to HPS estimation learn to regress a parametric 3D body model like SMPL [37] from images using 2D joint locations as training data. Such joint locations are easy for human annotators to label in images. Supervising the training with joints, however, is not sufficient to learn shape since an infinite number of body shapes can share the same joints. For example, consider someone who puts on weight. Their body shape changes but their joints stay the same. Several recent methods employ additional 2D cues, such as the silhouette, to provide additional shape cues [51, 52]. Silhouettes, however, are influenced by clothing and do not provide explicit 3D supervision. Synthetic approaches [35], on the other hand, drape SMPL 3D bodies in virtual clothing and render them in images. While this provides ground-truth 3D shape, realistic synthesis of clothed humans is challenging, resulting in a domain gap.
29
+
30
+ To address these issues, we present SHAPY, a new deep neural network that accurately regresses 3D body shape and pose from a single RGB image. To train SHAPY, we first need to address the lack of paired training data with real images and ground-truth shape. Without access to such data, we need alternatives that are easier to acquire, analogous to 2D joints used in pose estimation. To do so, we introduce two novel datasets and corresponding training methods.
31
+
32
+ First, in lieu of full 3D body scans, we use images of people with diverse body shapes for which we have anthropometric measurements such as height as well as chest, waist, and hip circumference. While many 3D human shapes can share the same measurements, they do constrain the space of possible shapes. Additionally, these are important measurements for applications in clothing and health. Accurate anthropometric measurements like these are difficult for individuals to take themselves but they are often captured for
33
+
34
+ ![](images/bfa030a30a9c767dee39f88889583eb9983c078cdcc3d6096c6507851e279c45.jpg)
35
+ Figure 2. Model-agency websites contain multiple images of models together with anthropometric measurements. A wide range of body shapes are represented; example from pexels.com.
36
+
37
+ ![](images/df34701c57dd82525ec83d97e5b16e4d27a6c2c47434947ab3e6f465d2102de9.jpg)
38
+
39
+ ![](images/27239126ffb078c545c39c53cac0c7cbf234931b65f1b381ff876c441a161533.jpg)
40
+
41
+ ![](images/20a26214dbe54451b4e4857b91ca3d9e6d587c422413edc28debe5e5f4a5756e.jpg)
42
+ Figure 3. We crowd-source scores for linguistic body-shape attributes [57] and compute anthropometric measurements for CAESAR [47] body meshes. We also crowd-source linguistic shape attribute scores for model images, like those in Fig. 2
43
+
44
+ ![](images/2108727c4ba1022b3ada9ff04d9cf935f8deb83af72b31acf848e0bbf23a0f48.jpg)
45
+
46
+ ![](images/59ee047256bb820b7fc69590de3d11acbff5c8fb33a13b3f93279a713b04d2f9.jpg)
47
+
48
+ different applications. Specifically, modeling agencies provide such information about their models; accuracy is a requirement for modeling clothing. Thus, we collect a diverse set of such model images (with varied ethnicity, clothing, and body shape) with associated measurements; see Fig. 2.
49
+
50
+ Since sparse anthropometric measurements do not fully constrain body shape, we exploit a novel approach and also use linguistic shape attributes. Prior work has shown that people can rate images of others according to shape attributes such as "short/tall", "long legs" or "pear shaped" [57]; see Fig. 3. Using the average scores from several raters, Streuber et al. [57] (BodyTalk) regress metrically accurate 3D body shape. This approach gives us a way to easily label images of people and use these labels to constrain 3D shape. To our knowledge, this sort of linguistic shape attribute data has not previously been exploited to train a neural network to infer 3D body shape from images.
51
+
52
+ We exploit these new datasets to train SHAPY with three novel losses, which can be exploited by any 3D human body reconstruction method: (1) We define functions of the SMPL body mesh that return a sparse set of anthropometric measurements. When measurements are available for an image we use a loss that penalizes mesh measurements that differ from the ground-truth (GT). (2) We learn a "Shape to
53
+
54
+ Attribute" (S2A) function that maps 3D bodies to linguistic attribute scores. During training, we map meshes to attribute scores and penalize differences from the GT scores. (3) We similarly learn a function that maps "Attributes to Shape" (A2S). We then penalize body shape parameters that deviate from the prediction.
55
+
56
+ We study each term in detail to arrive at the final method. Evaluation is challenging because existing benchmarks with GT shape either contain too few subjects [61] or have limited clothing complexity and only pseudo-GT shape [51]. We fill this gap with a new dataset, named "Human Bodies in the Wild" (HBW), that contains a ground-truth 3D body scan and several in-the-wild photos of 35 subjects, for a total of 2543 photos. Evaluation on this shows that SHAPY estimates much more accurate 3D shape.
57
+
58
+ Models, data and code are available at shapy.is.tue.mpg.de.
59
+
60
+ # 2. Related Work
61
+
62
+ 3D human pose and shape (HPS): Methods that reconstruct 3D human bodies from one or more RGB images can be split into two broad categories: (1) parametric methods that predict parameters of a statistical 3D body model, such as SCAPE [3], SMPL [37], SMPL-X [43], Adam [26], GHUM [65], and (2) non-parametric methods that predict a free-form representation of the human body [24, 50, 59, 64]. Parametric approaches lack details w.r.t. non-parametric ones, e.g., clothing or hair. However, parametric models disentangle the effects of identity and pose on the overall shape. Therefore, their parameters provide control for re-shaping and re-posing. Moreover, pose can be factored out to bring meshes in a canonical pose; this is important for evaluating estimates of an individual's shape. Finally, since topology is fixed, meshes can be compared easily. For these reasons, we use a SMPL-X body model.
63
+
64
+ Parametric methods follow two main paradigms, and are based on optimization or regression. Optimization-based methods [5, 7, 16, 43] search for model configurations that best explain image evidence, usually 2D landmarks [8], subject to model priors that usually encourage parameters to be close to the mean of the model space. Numerous methods penalize the discrepancy between the projected and ground-truth silhouettes [22, 34] to estimate shape. However, this needs special care to handle clothing [4]; without this, erroneous solutions emerge that "inflate" body shape to explain the "clothed" silhouette. Regression-based methods [9, 14, 25, 27, 30, 33, 35, 40, 66] are currently based on deep neural networks that directly regress model parameters from image pixels. Their training sets are a mixture of data captured in laboratory settings [23, 56], with model parameters estimated from MoCap markers [39], and in-the-wild image collections, such as COCO [36], that contain 2D keypoint annotations. Optimization and regression can be combined, for example via in-the-network model fitting [33, 40].
65
+
66
+ Estimating 3D body shape: State-of-the-art methods are effective for estimating 3D pose, but struggle with estimating body shape under clothing. There are several reasons for this. First, 2D keypoints alone are not sufficient to fully constrain 3D body shape. Second, shape priors address the lack of constraints, but bias solutions towards "average" shapes [7,33,40,43]. Third, datasets with in-the-wild images have noisy 3D bodies, recovered by fitting a model to 2D keypoints [7,43]. Fourth, datasets captured in laboratory settings have a small number of subjects, who do not represent the full spectrum of body shapes. Thus, there is a scarcity of images with known, accurate, 3D body shape. Existing methods deal with this in two ways.
67
+
68
+ First, rendering synthetic images is attractive since it gives automatic and precise ground-truth annotation. This involves shaping, posing, dressing and texturing a 3D body model [20,51,53,60,62], then lighting it and rendering it in a scene. Doing this realistically and with natural clothing is expensive, hence, current datasets suffer from a domain gap. Alternative methods use artist-curated 3D scans [42,49,50], which are realistic but limited in variety.
69
+
70
+ Second, 2D shape cues for in-the-wild images, (body-part segmentation masks [12,41,48], silhouettes [1,22,44]) are attractive, as these can be manually annotated or automatically detected [15, 18]. However, fitting to such cues often gives unrealistic body shapes, by inflating the body to "explain" the clothing "baked" into silhouettes and masks.
71
+
72
+ Most related to our work is the work of Sengupta et al. [51-53] who estimate body shape using a probabilistic learning approach, trained on edge-filtered synthetic images. They evaluate on the SSP-3D dataset of real images with pseudo-GT 3D bodies, estimated by fitting SMPL to multiple video frames. SSP-3D is biased to people with tight-fitting clothing. Their silhouette-based method works well on SSP-3D but does not generalize to people in normal clothing, tending to over-estimate body shape; see Fig. 1.
73
+
74
+ In contrast to previous work, SHAPY is trained with in-the-wild images paired with linguistic shape attributes, which are annotations that can be easily crowd-sourced for weak shape supervision. We also go beyond SSP-3D to provide HBW, a new dataset with in-the-wild images, varied clothing, and precise GT from 3D scans.
75
+
76
+ Shape, measurements and attributes: Body shapes can be generated from anthropometric measurements [2, 54, 55]. Tsoli et al. [58] register a body model to multiple high-resolution body scans to extract body measurements. The "Virtual Caliper" [46] allows users to build metrically accurate avatars of themselves using measurements or VR game controllers. ViBE [21] collects images, measurements (bust, waist, hip circumference, height) and the dress-size of models from clothing websites to train a clothing recommendation network. We draw inspiration from these approaches for data collection and supervision.
77
+
78
+ ![](images/6cca75dc54f920624027b4ea05ed644d59801323e931a3f077ee865c3f01127f.jpg)
79
+ Figure 4. Shape representations and data collection. Our goal is 3D body shape estimation from in-the-wild images. Collecting data for direct supervision is difficult and does not scale. We explore two alternatives. Linguistic Shape Attributes: We annotate attributes ("A") for CAESAR meshes, for which we have accurate shape ("S") parameters, and learn the "A2S" and "S2A" models, to map between these representations. Attribute annotations for images can be easily crowd-sourced, making these scalable. Anthropometric Measurements: We collect images with sparse body measurements from model-agency websites. A virtual measurement module [46] computes the measurements from 3D meshes. Training: We combine these sources to learn a regressor with weak supervision that infers 3D shape from an image.
80
+
81
+ Streuber et al. [57] learn BodyTalk, a model that generates 3D body shapes from linguistic attributes. For this, they select attributes that describe human shape and ask annotators to rate how much each attribute applies to a body. They fit a linear model that maps attribute ratings to SMPL shape parameters. Inspired by this, we collect attribute ratings for CAESAR meshes [47] and in-the-wild data as proxy shape supervision to train a HPS regressor. Unlike BodyTalk, SHAPY automatically infers shape from images.
82
+
83
+ Anthropometry from images: Single-View metrology [10] estimates the height of a person in an image, using horizontal and vertical vanishing points and the height of a reference object. Günel et al. [17] introduce the IMDB-23K dataset by gathering publicly available celebrity images and their height information. Zhu et al. [68] use this dataset to learn to predict the height of people in images. Dey et al. [11] estimate the height of users in a photo collection by computing height differences between people in an image, creating a graph that links people across photos, and solving a maximum likelihood estimation problem. Bieler et al. [6] use gravity as a prior to convert pixel measurements extracted from a video to metric height. These methods do not address body shape.
84
+
85
+ # 3. Representations & Data for Body Shape
86
+
87
+ We use linguistic shape attributes and anthropometric measurements as a connecting component between in-the-wild images and ground-truth body shapes; see Fig. 4. To that end, we annotate linguistic shape attributes for 3D meshes and in-the-wild images, the latter from fashion-model agencies, labeled via Amazon Mechanical Turk.
88
+
89
+ ![](images/68c62eaebd0ba1d7f424c9500dba7f0418a9a1506e5a6d492f7972cfda8fef19.jpg)
90
+ Figure 5. Histogram of height and chest/waist/hips circumference for data from model-agency websites (Sec. 3.2) and CAESAR. Model-agency data is diverse, yet not as much as CAESAR data.
91
+
92
+ # 3.1. SMPL-X Body Model
93
+
94
+ We use SMPL-X [43], a differentiable model that maps shape, $\beta$, pose, $\theta$, and expression, $\psi$, parameters to a 3D mesh, $M$, with $N = 10,475$ vertices, $V$. The shape vector $\beta \in \mathbb{R}^B$ ($B \leq 300$) has coefficients of a low-dimensional PCA space. The vertices are posed with linear blend skinning using a learned rigged skeleton, $X \in \mathbb{R}^{55 \times 3}$.
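For intuition, here is a minimal NumPy sketch of how such a PCA shape space maps a shape vector to mesh vertices. The template, shape directions, and dimensions are random stand-ins, and the real SMPL-X model additionally applies pose- and expression-dependent blend shapes and linear blend skinning (the full model is available via the authors' `smplx` Python package).

```python
import numpy as np

# Illustrative stand-ins for the learned model data (not the real SMPL-X assets).
N_VERTS, B = 10475, 10          # number of vertices, number of shape coefficients used

rng = np.random.default_rng(0)
v_template = rng.standard_normal((N_VERTS, 3))       # mean/template mesh vertices
shape_dirs = rng.standard_normal((N_VERTS, 3, B))    # PCA shape directions ("blend shapes")

def shaped_vertices(betas: np.ndarray) -> np.ndarray:
    """Apply the linear shape blend shapes: V(beta) = T + S @ beta."""
    assert betas.shape == (B,)
    return v_template + shape_dirs @ betas            # (N_VERTS, 3)

verts = shaped_vertices(np.zeros(B))                  # beta = 0 gives the mean shape
```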
95
+
96
+ # 3.2. Model-Agency Images
97
+
98
+ Model agencies typically provide multiple color images of each model, in various poses, outfits, hairstyles, scenes, and with varying camera framing, together with anthropometric measurements and clothing size. We collect training data from multiple model-agency websites, focusing on under-represented body types, namely: curve-models.com, cocainemodels.com, nemesismodels.com, jayjay-models.de, kultmodels.com, modelwerk.de, models1.co.uk, showcase.de, the-models.de, and ullamodels.com. In addition to photos, we store gender and four anthropometric measurements, i.e., height, chest, waist and hip circumference, when available. To avoid having the same subject in both the training and test set, we match model identities across websites to identify models that work for several agencies. For details, see Sup. Mat.
99
+
100
+ After identity filtering, we have 94,620 images of 4,419 models along with their anthropometric measurements. However, the distributions of these measurements, shown in Fig. 5, reveal a bias towards "fashion model" body shapes, while other body types are under-represented in comparison to CAESAR [47]. To enhance diversity in body shapes and avoid strong biases and long tails, we compute the quantized 2D distribution for height and weight and sample up to 3 models per bin. This results in $N = 1,185$ models (714 females, 471 males) and 20,635 images.
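A possible implementation of this balanced subsampling is sketched below. The number of bins, the dictionary field names, and the random seed are illustrative assumptions, not the exact settings used to build the dataset.

```python
import numpy as np
from collections import defaultdict

def subsample_by_height_weight(models, n_bins=20, max_per_bin=3, seed=0):
    """Keep at most `max_per_bin` models per quantized (height, weight) bin.

    `models` is a hypothetical list of dicts with "height" and "weight" keys.
    """
    rng = np.random.default_rng(seed)
    h = np.array([m["height"] for m in models], dtype=float)
    w = np.array([m["weight"] for m in models], dtype=float)
    h_bin = np.digitize(h, np.linspace(h.min(), h.max(), n_bins))
    w_bin = np.digitize(w, np.linspace(w.min(), w.max(), n_bins))

    # Group model indices by their 2D bin.
    buckets = defaultdict(list)
    for idx, key in enumerate(zip(h_bin, w_bin)):
        buckets[key].append(idx)

    # Randomly keep up to `max_per_bin` models in each bin.
    keep = []
    for idxs in buckets.values():
        chosen = rng.permutation(len(idxs))[:max_per_bin]
        keep.extend(idxs[i] for i in chosen)
    return [models[i] for i in sorted(keep)]
```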
101
+
102
+ # 3.3. Linguistic Shape Attributes
103
+
104
+ Human body shape can be described by linguistic shape attributes [19]. We draw inspiration from Streuber et al. [57] who collect scores for 30 linguistic attributes for
105
+
106
+ <table><tr><td colspan="2">Male &amp; Female</td><td>Male only</td><td>Female only</td></tr><tr><td>short</td><td>long neck</td><td>skinny arms</td><td>pear shaped</td></tr><tr><td>big</td><td>long legs</td><td>average</td><td>petite</td></tr><tr><td>tall</td><td>long torso</td><td>rectangular</td><td>slim waist</td></tr><tr><td>muscular</td><td>short arms</td><td>delicate build</td><td>large breasts</td></tr><tr><td></td><td>broad shoulders</td><td>soft body</td><td>skinny legs</td></tr><tr><td></td><td></td><td>masculine</td><td>feminine</td></tr></table>
107
+
108
+ Table 1. Linguistic shape attributes for human bodies. Some attributes apply to both genders, but others are gender specific.
109
+
110
+ 256 3D body meshes, generated by sampling SMPL's shape space, to train a linear "attribute to shape" regressor. In contrast, we train a model that takes as input an image, instead of attributes, and outputs an accurate 3D shape (and pose).
111
+
112
+ We crowd-source linguistic attribute scores for a variety of body shapes, using images from the following sources:
113
+
114
+ Rendered CAESAR images: We use CAESAR [47] bodies to learn mappings between linguistic shape attributes, anthropometric measurements, and SMPL-X shape parameters, $\beta$ . Specifically, we register a "gendered" SMPL-X model with 100 shape components to 1,700 male and 2,102 female 3D scans, pose all meshes in an A-pose, and render synthetic images with the same virtual camera.
115
+
116
+ Model-agency photos: Each annotator is shown 3 body images per subject, sampled from the image pool of Sec. 3.2.
117
+
118
+ Annotation: To keep annotation tractable, we use $A = 15$ linguistic shape attributes per gender (subset of BodyTalk's [57] attributes); see Tab. 1. Each image is annotated by $K = 15$ annotators on Amazon Mechanical Turk. Their task is to "indicate how strongly [they] agree or disagree that the [listed] words describe the shape of the [depicted] person's body"; for an example, see Sup. Mat. Annotations range on a discrete 5-level Likert scale from 1 (strongly disagree) to 5 (strongly agree). We get a rating matrix $\mathbf{A} \in \{1,2,3,4,5\}^{N \times A \times K}$ , where $N$ is the number of subjects. In the following, $a_{ijk}$ denotes an element of $\mathbf{A}$ .
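To make the data structure concrete, the following toy snippet builds a rating tensor of this shape with random placeholder scores and computes the per-attribute mean over annotators, which is the quantity used later in Eq. (1).

```python
import numpy as np

N, A, K = 4, 15, 15                                  # subjects, attributes, annotators
rng = np.random.default_rng(0)

# Toy stand-in for the crowd-sourced Likert ratings a_ijk in {1, ..., 5}.
ratings = rng.integers(low=1, high=6, size=(N, A, K))

# Mean score per subject and attribute, i.e. the \bar{a}_{i,j} of Eq. (1).
mean_scores = ratings.mean(axis=-1)                  # shape (N, A)
```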
119
+
120
+ # 4. Mapping Shape Representations
121
+
122
+ In Sec. 3 we introduce three body-shape representations: (1) SMPL-X's PCA shape space (Sec. 3.1), (2) anthropometric measurements (Sec. 3.2), and (3) linguistic shape attribute scores (Sec. 3.3). Here we learn mappings between these, so that in Sec. 5 we can define new losses for training body shape regressors using multiple data sources.
123
+
124
+ # 4.1. Virtual Measurements (VM)
125
+
126
+ We obtain anthropometric measurements from a 3D body mesh in a T-pose, namely height, $H(\beta)$, weight, $W(\beta)$, and chest, waist and hip circumferences, $C_{\mathrm{c}}(\beta)$, $C_{\mathrm{w}}(\beta)$, and $C_{\mathrm{h}}(\beta)$, respectively, by following Wuhrer et al. [63] and the "Virtual Caliper" [46]. For details on how we compute these measurements, see Sup. Mat.
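The exact procedure is given in the Sup. Mat.; the simplified sketch below only conveys the idea of "virtual" measurements on a T-posed mesh, taking height as the vertical extent of the vertices and approximating a circumference by the convex-hull perimeter of a thin horizontal slice. The y-up axis convention, the slice band width, and the units (meters) are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import ConvexHull

def mesh_height(verts: np.ndarray) -> float:
    """Height of a T-posed mesh as its extent along the vertical (y) axis."""
    return float(verts[:, 1].max() - verts[:, 1].min())

def slice_circumference(verts: np.ndarray, y_level: float, band: float = 0.01) -> float:
    """Approximate a body circumference as the perimeter of the 2D convex hull
    of the vertices inside a thin horizontal band around `y_level` (meters)."""
    ring = verts[np.abs(verts[:, 1] - y_level) < band][:, [0, 2]]   # project to x-z plane
    if len(ring) < 3:
        raise ValueError("band contains too few vertices")
    hull_pts = ring[ConvexHull(ring).vertices]
    edges = np.roll(hull_pts, -1, axis=0) - hull_pts
    return float(np.linalg.norm(edges, axis=1).sum())
```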
127
+
128
+ # 4.2. Attributes and 3D Shape
129
+
130
+ Attributes to Shape (A2S): We predict SMPL-X shape coefficients from linguistic attribute scores with a second-degree polynomial regression model. For each shape $\beta_{i}$ , $i = 1\ldots N$ , we create a feature vector, $\mathbf{x}_i^{\mathrm{A2S}}$ , by averaging for each of the $A$ attributes the corresponding $K$ scores:
131
+
132
+ $$
133
+ \mathbf {x} _ {i} ^ {\mathrm {A 2 S}} = \left[ \bar {a} _ {i, 1}, \dots , \bar {a} _ {i, A} \right], \quad \bar {a} _ {i, j} = \frac {1}{K} \sum_ {k = 1} ^ {K} a _ {i j k}, \tag {1}
134
+ $$
135
+
136
+ where $i$ is the shape index (list of "fashion" or CAESAR bodies), $j$ is the attribute index, and $k$ the annotation index. We then define the full feature matrix for all $N$ shapes as:
137
+
138
+ $$
139
+ \mathbf {X} ^ {\mathrm {A 2 S}} = \left[ \phi \left(\mathbf {x} _ {1} ^ {\mathrm {A 2 S}}\right), \quad \dots , \quad \phi \left(\mathbf {x} _ {N} ^ {\mathrm {A 2 S}}\right) \right] ^ {\top}, \tag {2}
140
+ $$
141
+
142
+ where $\phi (\mathbf{x}_i^{\mathrm{A2S}})$ maps $\mathbf{x}_i$ to $2^{\mathrm{nd}}$ order polynomial features. The target matrix $\mathbf{Y} = [\beta_{1},\dots,\beta_{N}]^{\top}$ contains the shape parameters $\beta_{i} = [\beta_{i,1},\dots,\beta_{i,B}]^{\top}$ . We compute the polynomial model's coefficients $\mathbf{W}$ via least-squares fitting:
143
+
144
+ $$
145
+ \mathbf {Y} = \mathbf {X} \mathbf {W} + \epsilon . \tag {3}
146
+ $$
147
+
148
+ Empirically, the polynomial model performs better than several models that we evaluated; for details, see Sup. Mat.
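One straightforward way to fit such a model is with scikit-learn, as sketched below on random stand-in data; the real inputs are the averaged CAESAR attribute scores and the corresponding registered SMPL-X shape coefficients. The S2A model described next is the same construction with inputs and outputs swapped.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

N, A, B = 2000, 15, 10                       # subjects, attributes, shape coefficients
rng = np.random.default_rng(0)
X_attr = rng.uniform(1, 5, size=(N, A))      # stand-in for mean attribute scores (Eq. 1)
Y_beta = rng.standard_normal((N, B))         # stand-in for SMPL-X betas of the same subjects

# A2S: second-degree polynomial features followed by a linear least-squares fit (Eq. 3).
poly = PolynomialFeatures(degree=2)
a2s = LinearRegression().fit(poly.fit_transform(X_attr), Y_beta)

betas_hat = a2s.predict(poly.transform(X_attr[:1]))   # predicted shape for one subject
```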
149
+
150
+ Shape to Attributes (S2A): We predict linguistic attribute scores, $A$ , from SMPL-X shape parameters, $\beta$ . Again, we fit a second-degree polynomial regression model. S2A has "swapped" inputs and outputs w.r.t. A2S:
151
+
152
+ $$
153
+ \mathbf {x} _ {i} ^ {\mathrm {S 2 A}} = \left[ \boldsymbol {\beta} _ {i, 1}, \dots , \boldsymbol {\beta} _ {i, B} \right], \tag {4}
154
+ $$
155
+
156
+ $$
157
+ \mathbf {y} _ {i} = \left[ \bar {a} _ {i, 1}, \dots , \bar {a} _ {i, A} \right] ^ {\top}. \tag {5}
158
+ $$
159
+
160
+ Attributes & Measurements to Shape (AHWC2S): Given a sparse set of anthropometric measurements, we predict SMPL-X shape parameters, $\beta$ . The input vector is:
161
+
162
+ $$
163
+ \mathbf {x} _ {i} ^ {\mathrm {H W C 2 S}} = \left[ h _ {i}, w _ {i}, c _ {c _ {i}}, c _ {w _ {i}}, c _ {h _ {i}} \right], \tag {6}
164
+ $$
165
+
166
+ where $c_{c}, c_{w}, c_{h}$ are the chest, waist, and hip circumferences, respectively, $h$ and $w$ are the height and weight, and HWC2S means Height + Weight + Circumference to Shape. The regression target is the SMPL-X shape parameters, $\mathbf{y}_i$.
167
+
168
+ When both Attributes and measurements are available, we combine them for the AHWC2S model with input:
169
+
170
+ $$
171
+ \mathbf {x} _ {i} ^ {\mathrm {A H W C 2 S}} = \left[ \bar {a} _ {i, 1}, \dots , \bar {a} _ {i, A}, h _ {i}, w _ {i}, c _ {c _ {i}}, c _ {w _ {i}}, c _ {h _ {i}} \right]. \tag {7}
172
+ $$
173
+
174
+ In practice, depending on which measurements are available, we train and use different regressors. Following the naming convention of AHWC2S, these models are: AH2S, AHW2S, AC2S, and AHC2S, as well as their equivalents without attribute input H2S, HW2S, C2S, and HC2S. For an evaluation of the contribution of linguistic shape attributes on top of each anthropometric measurement, see Sup. Mat.
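One simple way to organise these variants is a lookup keyed by the available inputs, as in the hypothetical sketch below; the feature ordering follows Eqs. (6) and (7), and the regressors themselves are assumed to be fitted as in the A2S snippet above.

```python
def variant_key(has_attr, has_height, has_weight, has_circ):
    """Build the model name, e.g. (True, True, False, True) -> "AHC2S"."""
    letters = [c for c, flag in zip("AHWC", (has_attr, has_height, has_weight, has_circ)) if flag]
    return "".join(letters) + "2S"

def build_features(sample):
    """Concatenate whichever inputs are present, in A, H, W, C order."""
    feats = []
    feats += list(sample.get("attributes", []))            # mean attribute scores
    feats += [sample["height"]] if "height" in sample else []
    feats += [sample["weight"]] if "weight" in sample else []
    feats += list(sample.get("circumferences", []))        # chest, waist, hips
    return feats

def predict_shape(sample, regressors):
    """`regressors` maps variant names (e.g. "AH2S") to fitted models."""
    key = variant_key("attributes" in sample, "height" in sample,
                      "weight" in sample, "circumferences" in sample)
    return regressors[key].predict([build_features(sample)])
```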
175
+
176
+ Training Data: To train the A2S and S2A mappings we use CAESAR data, for which we have SMPL-X shape parameters, anthropometric measurements, and linguistic attribute scores. We train separate gender-specific models.
177
+
178
+ ![](images/2505352f6792ed72f95626b73ec73e1689dddf3d60e5884b87855504daccfa8a.jpg)
179
+ Figure 6. SHAPY first estimates shape, $\hat{\beta}$ , and pose, $\hat{\theta}$ . Shape is used by: (1) our virtual anthropometric measurement (VM) module to compute height, $\hat{H}$ , and circumferences, $\hat{C}$ , and (2) our S2A module to infer linguistic attribute scores, $\hat{A}$ . There are several SHAPY variations, e.g., SHAPY-H uses only VM to infer $\hat{H}$ , while SHAPY-HA uses VM to infer $\hat{H}$ and S2A to infer $\hat{A}$ .
180
+
181
+ # 5. 3D Shape Regression from an Image
182
+
183
+ We present SHAPY, a network that predicts SMPL-X parameters from an RGB image with more accurate body shape than existing methods. To improve the realism and accuracy of shape, we explore training losses based on all shape representations discussed above, i.e., SMPL-X meshes (Sec. 3.1), linguistic attribute scores (Sec. 3.3) and anthropometric measurements (Sec. 4.1). In the following, symbols with a hat denote regressed values and symbols without a hat denote ground-truth values.
184
+
185
+ We convert shape $\hat{\beta}$ to height and circumference values $\{\hat{H},\hat{C}_{\mathrm{c}},\hat{C}_{\mathrm{w}},\hat{C}_{\mathrm{h}}\} = \{H(\hat{\beta}),C_{\mathrm{c}}(\hat{\beta}),C_{\mathrm{w}}(\hat{\beta}),C_{\mathrm{h}}(\hat{\beta})\}$, by applying our virtual measurement tool (Sec. 4.1) to the mesh $M(\hat{\beta})$ in the canonical T-pose. We also convert shape $\hat{\beta}$ to linguistic attribute scores, with $\hat{A} = \mathrm{S2A}(\hat{\beta})$.
186
+
187
+ We train various SHAPY versions with the following "SHAPY losses", using either linguistic shape attributes, or anthropometric measurements, or both:
188
+
189
+ $$
190
+ L_{\text{attr}} = \left\| A - \hat{A} \right\|_{2}^{2}, \tag{8}
191
+ $$
192
+
193
+ $$
194
+ L_{\text{height}} = \left\| H - \hat{H} \right\|_{2}^{2}, \tag{9}
195
+ $$
196
+
197
+ $$
198
+ L_{\text{circ}} = \sum_{i \in \{c, w, h\}} \left\| C_{i} - \hat{C}_{i} \right\|_{2}^{2}. \tag{10}
199
+ $$
200
+
201
+ These are optionally added to a base loss, $L_{\mathrm{base}}$, defined below in "Training Details". The architecture of SHAPY, with all optional components, is shown in Fig. 6. A suffix of color-coded letters describes which of the above losses are used when training a model. For example, SHAPY-AH denotes a model trained with the attribute and height losses, i.e., $L_{\mathrm{SHAPY\text{-}AH}} = L_{\mathrm{base}} + L_{\mathrm{attr}} + L_{\mathrm{height}}$.
202
+
203
+ Training Details: We initialize SHAPY with the ExPose [9] network weights and use curated fits [9], H3.6M [23], the SPIN [33] training data, and our model-agency dataset (Sec. 3.2) for training. In each batch, $50\%$ of the images are sampled from the model-agency images, for which we ensure a gender balance. The "SHAPY losses" of Eqs. (8) to (10) are applied only on the model-agency images. We use these on top of a standard base loss:
204
+
205
+ $$
206
+ L_{\text{base}} = L_{\text{pose}} + L_{\text{shape}}, \tag{11}
207
+ $$
208
+
209
+ where $L_{\mathrm{joints}}^{2\mathrm{D}}$ and $L_{\mathrm{joints}}^{3\mathrm{D}}$ are 2D and 3D joint losses:
210
+
211
+ $$
212
+ L_{\text{pose}} = L_{\text{joints}}^{2\mathrm{D}} + L_{\text{joints}}^{3\mathrm{D}} + L_{\boldsymbol{\theta}}, \tag{12}
213
+ $$
214
+
215
+ $$
216
+ L_{\text{shape}} = L_{\beta} + L_{\beta}^{\text{prior}}, \tag{13}
217
+ $$
218
+
219
+ where $L_{\theta}$ and $L_{\beta}$ are losses on pose and shape parameters, and $L_{\beta}^{\mathrm{prior}}$ is PIXIE's [13] "gendered" shape prior. All losses are L2, unless explicitly specified otherwise. Losses on SMPL-X parameters are applied only on the pose data [9, 23, 33]. For more implementation details, see Sup. Mat.
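To make the loss composition concrete, the following PyTorch-style sketch implements Eqs. (8)-(10) for a mixed batch; the dictionary keys, the masking by data source, and the absence of per-term weights are illustrative assumptions rather than the exact implementation.

```python
import torch

def shapy_shape_losses(pred, target, is_agency):
    """Eqs. (8)-(10): attribute, height and circumference losses, applied only
    to batch entries that come from the model-agency images.

    `pred` / `target` are hypothetical dicts of tensors:
      "attr": (B, A) attribute scores, "height": (B,), "circ": (B, 3) chest/waist/hips.
    `is_agency` is a (B,) boolean mask marking model-agency samples.
    """
    m = is_agency.float()
    l_attr   = (m * ((pred["attr"]   - target["attr"])   ** 2).sum(dim=-1)).mean()
    l_height = (m * ((pred["height"] - target["height"]) ** 2)).mean()
    l_circ   = (m * ((pred["circ"]   - target["circ"])   ** 2).sum(dim=-1)).mean()
    return l_attr + l_height + l_circ

# Tiny usage example with random tensors; for a SHAPY-AH variant only the
# attribute and height terms would be added to the base loss.
B_, A_ = 4, 15
pred   = {"attr": torch.rand(B_, A_), "height": torch.rand(B_), "circ": torch.rand(B_, 3)}
target = {k: torch.rand_like(v) for k, v in pred.items()}
loss = shapy_shape_losses(pred, target, is_agency=torch.tensor([1, 1, 0, 0], dtype=torch.bool))
```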
220
+
221
+ # 6. Experiments
222
+
223
+ # 6.1. Evaluation Datasets
224
+
225
+ 3D Poses in the Wild (3DPW) [61]: We use this to evaluate pose accuracy. It is widely used, but has only 5 test subjects, i.e., limited shape variation. For results, see Sup. Mat.
+
+ Sports Shape and Pose 3D (SSP-3D) [51]: We use this to evaluate 3D body shape accuracy from images. It has 62 tightly-clothed subjects in 311 in-the-wild images from Sports-1M [29], with pseudo ground-truth SMPL meshes that we convert to SMPL-X for evaluation.
226
+
227
+ Model Measurements Test Set (MMTS): We use this to evaluate anthropometric measurement accuracy, as a proxy for body shape accuracy. To create MMTS, we withhold 2,699/1,514 images of 143/95 female/male identities from our model-agency data, described in Sec. 3.2.
228
+
229
+ CAESAR Meshes Test Set (CMTS): We use CAESAR to measure the accuracy of SMPL-X body shapes and linguistic shape attributes for the models of Sec. 4. Specifically, we compute: (1) errors for SMPL-X meshes estimated from linguistic shape attributes and/or anthropometric measurements by A2S and its variations, and (2) errors for linguistic shape attributes estimated from SMPL-X meshes by S2A. To create an unseen mesh test set, we withhold 339 male and 410 female CAESAR meshes from the crowd-sourced CAESAR linguistic shape attributes, described in Sec. 3.3.
230
+
231
+ Human Bodies in the Wild (HBW): The field is missing a dataset with varied bodies, varied clothing, in-the-wild images, and accurate 3D shape ground truth. We fill this gap by collecting a novel dataset, called "Human Bodies in the Wild" (HBW), with three steps: (1) We collect accurate 3D body scans for 35 subjects (20 female, 15 male), and register a "gendered" SMPL-X model to these to recover 3D SMPL-X ground-truth bodies [45]. (2) We take photos of each subject in "photo-lab" settings, i.e., in front of a white background with controlled lighting, and in various everyday outfits and "fashion" poses. (3) Subjects upload full-body photos of themselves taken in the wild. For each subject we take up to 111 photos in lab settings, and collect up to 126 in-the-wild photos. In total, HBW has 2,543 photos, 1,318 in the lab setting and 1,225 in the wild. We split the data into a validation and a test
232
+
233
+ ![](images/047559201dd4fd469f538cee17b928c0fe2ed2530ad1d116ad334ecd78f2b47b.jpg)
234
+ Figure 7. "Human Bodies in the Wild" (HBW) color images, taken in the lab and in the wild, and the SMPL-X ground-truth shape.
235
+
236
+ set (val/test) with 10/25 subjects (6/14 female, 4/11 male) and 781/1,762 images (432/983 female, 349/779 male), respectively. Figure 7 shows a few HBW subjects, photos and their SMPL-X ground-truth shapes. All subjects gave prior written informed consent to participate in this study and to release the data. The study was reviewed by the ethics board of the University of Tübingen, without objections.
237
+
238
+ # 6.2. Evaluation Metrics
239
+
240
+ We use standard accuracy metrics for 3D body pose, but also introduce metrics specific to 3D body shape.
241
+
242
+ Anthropometric Measurements: We report the mean absolute error in mm between ground-truth and estimated measurements, computed as described in Sec. 4.1. When weight is available, we report the mean absolute error in kg.
+
+ MPJPE and V2V metrics: We report in Sup. Mat. the mean per-joint point error (MPJPE) and mean vertex-to-vertex error (V2V), when SMPL-X meshes are available. The prefix "PA" denotes metrics after Procrustes alignment.
+
+ Mean point-to-point error $(\mathrm{P2P_{20K}})$: SMPL-X has a highly non-uniform vertex distribution across the body, which negatively biases the mean vertex-to-vertex (V2V) error when comparing estimated and ground-truth SMPL-X meshes. To account for this, we evenly sample 20K points on SMPL-X's surface and report the mean point-to-point $(\mathrm{P2P_{20K}})$ error. For details, see Sup. Mat.
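The exact sampling procedure is described in the Sup. Mat.; below is a plausible sketch that assumes area-weighted barycentric sampling and exploits the fact that estimated and ground-truth meshes share SMPL-X's topology, so identical surface locations can be compared.

```python
import numpy as np

def sample_surface(verts, faces, n_points=20000, seed=0):
    """Area-weighted barycentric sampling on a triangle mesh.
    Returns face indices and barycentric coordinates so the same surface
    locations can be evaluated on any mesh with identical topology."""
    rng = np.random.default_rng(seed)
    tri = verts[faces]                                                   # (F, 3, 3)
    area = 0.5 * np.linalg.norm(np.cross(tri[:, 1] - tri[:, 0],
                                         tri[:, 2] - tri[:, 0]), axis=1)
    face_idx = rng.choice(len(faces), size=n_points, p=area / area.sum())
    r1, r2 = rng.random(n_points), rng.random(n_points)
    bary = np.stack([1.0 - np.sqrt(r1),
                     np.sqrt(r1) * (1.0 - r2),
                     np.sqrt(r1) * r2], axis=1)                          # sums to 1
    return face_idx, bary

def surface_points(verts, faces, face_idx, bary):
    return (verts[faces[face_idx]] * bary[:, :, None]).sum(axis=1)       # (n_points, 3)

def p2p_20k(verts_pred, verts_gt, faces, n_points=20000):
    """Mean point-to-point error over points sampled at identical barycentric
    locations on two meshes that share the same topology."""
    face_idx, bary = sample_surface(verts_gt, faces, n_points)
    pts_gt = surface_points(verts_gt, faces, face_idx, bary)
    pts_pred = surface_points(verts_pred, faces, face_idx, bary)
    return float(np.linalg.norm(pts_pred - pts_gt, axis=1).mean())
```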
243
+
244
+ # 6.3. Shape-Representation Mappings
245
+
246
+ We evaluate the models A2S and S2A, which map between the various body shape representations (Sec. 4).
247
+
248
+ A2S and its variations: How well can we infer 3D body shape from just linguistic shape attributes, anthropometric measurements, or both of these together? In Tab. 2, we report reconstruction and measurement errors using many combinations of attributes (A), height (H), weight (W), and circumferences (C). Evaluation on CMTS data shows that attributes improve the overall shape prediction across the board. For example, height+attributes (AH2S) has a lower point-to-point error than height alone. The best performing model, AHWC2S, uses everything, with $\mathrm{P2P_{20K}}$ errors of $5.8 \pm 2.0 \mathrm{~mm}$ (males) and $6.2 \pm 2.4 \mathrm{~mm}$ (females).
249
+
250
+ <table><tr><td></td><td>Method</td><td>P2P20K(mm)</td><td>Height(mm)</td><td>Weight(kg)</td><td>Chest(mm)</td><td>Waist(mm)</td><td>Hips(mm)</td></tr><tr><td rowspan="11">Male subjects</td><td>A2S</td><td>11.1 ± 5.2</td><td>29 ± 21</td><td>5 ± 4</td><td>30 ± 22</td><td>32 ± 24</td><td>28 ± 21</td></tr><tr><td>H2S</td><td>12.1 ± 6.1</td><td>5 ± 4</td><td>11 ± 11</td><td>81 ± 66</td><td>102 ± 87</td><td>40 ± 33</td></tr><tr><td>AH2S</td><td>6.8 ± 2.3</td><td>4 ± 3</td><td>3 ± 3</td><td>27 ± 21</td><td>29 ± 23</td><td>24 ± 18</td></tr><tr><td>HW2S</td><td>8.1 ± 2.7</td><td>5 ± 4</td><td>1 ± 1</td><td>24 ± 17</td><td>26 ± 20</td><td>21 ± 18</td></tr><tr><td>AHW2S</td><td>6.3 ± 2.1</td><td>4 ± 3</td><td>1 ± 1</td><td>19 ± 15</td><td>19 ± 14</td><td>20 ± 16</td></tr><tr><td>C2S</td><td>19.7 ± 11.1</td><td>59 ± 47</td><td>9 ± 8</td><td>55 ± 41</td><td>63 ± 49</td><td>37 ± 28</td></tr><tr><td>AC2S</td><td>9.6 ± 4.4</td><td>25 ± 19</td><td>3 ± 3</td><td>23 ± 19</td><td>21 ± 17</td><td>18 ± 14</td></tr><tr><td>HC2S</td><td>7.7 ± 2.6</td><td>5 ± 4</td><td>2 ± 2</td><td>28 ± 23</td><td>18 ± 15</td><td>13 ± 11</td></tr><tr><td>AHC2S</td><td>6.0 ± 2.0</td><td>4 ± 3</td><td>2 ± 2</td><td>21 ± 17</td><td>17 ± 14</td><td>13 ± 10</td></tr><tr><td>HWC2S</td><td>7.3 ± 2.6</td><td>5 ± 4</td><td>1 ± 1</td><td>20 ± 15</td><td>14 ± 12</td><td>13 ± 11</td></tr><tr><td>AHWC2S</td><td>5.8 ± 2.0</td><td>4 ± 3</td><td>1 ± 1</td><td>16 ± 13</td><td>13 ± 10</td><td>13 ± 10</td></tr></table>
251
+
252
+ Table 2. Results of A2S variants on CMTS for male subjects, using the male SMPL-X model. For females, see Sup. Mat.
253
+
254
+ <table><tr><td>Method</td><td>Model</td><td>Height</td><td>Chest</td><td>Waist</td><td>Hips</td><td>P2P20K</td></tr><tr><td>SMPLR [38]</td><td>SMPL</td><td>182</td><td>267</td><td>309</td><td>305</td><td>69</td></tr><tr><td>STRAPS [51]</td><td>SMPL</td><td>135</td><td>167</td><td>145</td><td>102</td><td>47</td></tr><tr><td>SPIN [33]</td><td>SMPL</td><td>59</td><td>92</td><td>78</td><td>101</td><td>29</td></tr><tr><td>TUCH [40]</td><td>SMPL</td><td>58</td><td>89</td><td>75</td><td>57</td><td>26</td></tr><tr><td>Sengupta et al. [52]</td><td>SMPL</td><td>82</td><td>133</td><td>107</td><td>63</td><td>32</td></tr><tr><td>ExPose [9]</td><td>SMPL-X</td><td>85</td><td>99</td><td>92</td><td>94</td><td>35</td></tr><tr><td>SHAPY (ours)</td><td>SMPL-X</td><td>51</td><td>65</td><td>69</td><td>57</td><td>21</td></tr></table>
255
+
256
+ Table 3. Evaluation on the HBW test set in mm. We compute the measurement and point-to-point $(\mathrm{P2P_{20K}})$ error between predicted and ground-truth SMPL-X meshes.
257
+
258
+ S2A: How well can we infer linguistic shape attributes from 3D shape? S2A's accuracy on inferring the attribute Likert score is $75\% / 69\%$ for males/females; details in Sup. Mat.
259
+
260
+ # 6.4. 3D Shape from an Image
261
+
262
+ We evaluate all of our model's variations (see Sec. 5) on the HBW validation set and find, perhaps surprisingly, that SHAPY-A outperforms other variants. We refer to this below (and Fig. 1) simply as "SHAPY" and report its performance in Tab. 3 for HBW, Tab. 4 for MMTS, and Tab. 5 for SSP-3D. For images with natural and varied clothing (HBW, MMTS), SHAPY significantly outperforms all other methods (Tabs. 3 and 4) using only weak 3D shape supervision (Attributes). On these images, Sengupta et al.'s method [52] struggles with the natural clothing. In contrast, their method is more accurate than SHAPY on SSP-3D (Tab. 5), which has tight "sports" clothing, in terms of PVE-T-SC, a scale-normalized metric used on this dataset. These results show that silhouettes are good for tight/minimal clothing and that SHAPY struggles with high BMI shapes due to the lack of such shapes in our training data; see Fig. 5. Note that, as HBW has true ground-truth 3D shape, it does not need SSP-3D's scaling for evaluation.
263
+
264
+ A key observation is that training with linguistic shape attributes alone is sufficient, i.e., without anthropometric measurements. Importantly, this opens up the possibility for significantly larger data collections. For a study of how different measurements or attributes impact accuracy, see Sup. Mat. Figure 8 shows SHAPY's qualitative results.
265
+
266
+ ![](images/1e1779b2ff7f968a7e0e516ce457ecc5ef7a309346f5198dbc11d526293fb161.jpg)
267
+ Figure 8. Qualitative results from HBW. From left to right: RGB, ground-truth shape, SHAPY and Sengupta et al. [52]. For example, in the upper- and lower- right images, SHAPY is less affected by pose variation and loose clothing.
268
+
269
+ ![](images/92f74e383f2b24024243d879120cf70015e814c6f7091e3cb996772cb35dc24b.jpg)
270
+
271
+ ![](images/277364a3ca64580fdf8cebb0967ccd582740298bf54c20f7007873f61a86ee85.jpg)
272
+
273
+ ![](images/f586dc83fde4030ff98dbe1ceb185545c90c5483f15f81d890d43f8b8a4b6d21.jpg)
274
+
275
+ ![](images/41586e20f72c2d39d03720933332e508f3f33c3bbb1a37dbb8d2d866354152e1.jpg)
276
+
277
+ ![](images/07c6453afac7248776ba39861d6db0a5bbfff78ca9d8278c7909b947310e7255.jpg)
278
+
279
+ ![](images/9f4dcd062913eb20cd99eb8cc4857c4b1a89d3763a64b79d5eaf4c96dfe76d3f.jpg)
280
+
281
+ ![](images/359c27a9ce6e697b91574bbc7458438b9632e3ed9f85aa851967a5622c2b3673.jpg)
282
+
283
+ <table><tr><td rowspan="2">Method</td><td rowspan="2">Model</td><td colspan="4">Mean absolute error (mm) ↓</td></tr><tr><td>Height</td><td>Chest</td><td>Waist</td><td>Hips</td></tr><tr><td>Sengupta et al. [52]</td><td>SMPL</td><td>84</td><td>186</td><td>263</td><td>142</td></tr><tr><td>TUCH [40]</td><td>SMPL</td><td>82</td><td>92</td><td>129</td><td>91</td></tr><tr><td>SPIN [33]</td><td>SMPL</td><td>72</td><td>91</td><td>129</td><td>101</td></tr><tr><td>STRAPS [51]</td><td>SMPL</td><td>207</td><td>278</td><td>326</td><td>145</td></tr><tr><td>ExPose [9]</td><td>SMPL-X</td><td>107</td><td>107</td><td>136</td><td>92</td></tr><tr><td>SHAPY (ours)</td><td>SMPL-X</td><td>71</td><td>64</td><td>98</td><td>74</td></tr></table>
284
+
285
+ Table 4. Evaluation on MMTS. We report the mean absolute error between ground-truth and estimated measurements.
286
+
287
+ <table><tr><td>Method</td><td>Model</td><td>PVE-T-SC</td><td>mIOU</td></tr><tr><td>HMR [27]</td><td>SMPL</td><td>22.9</td><td>0.69</td></tr><tr><td>SPIN [33]</td><td>SMPL</td><td>22.2</td><td>0.70</td></tr><tr><td>STRAPS [51]</td><td>SMPL</td><td>15.9</td><td>0.80</td></tr><tr><td>Sengupta et al. [52]</td><td>SMPL</td><td>13.6</td><td>-</td></tr><tr><td>SHAPY (ours)</td><td>SMPL-X</td><td>19.2</td><td>-</td></tr></table>
288
+
289
+ Table 5. Evaluation on the SSP-3D test set [51]. We report the scaled mean vertex-to-vertex error in T-pose [51], and mIOU.
290
+
291
+ # 7. Conclusion
292
+
293
+ SHAPY is trained to regress more accurate human body shape from images than previous methods, without explicit 3D shape supervision. To achieve this, we present two different ways to collect proxy annotations for 3D body shape for in-the-wild images. First, we collect sparse anthropometric measurements from online model-agency data. Second, we annotate images with linguistic shape attributes using crowd-sourcing. We learn mappings between body shape, measurements, and attributes, enabling us to supervise a regressor using any combination of these. To evaluate SHAPY, we introduce a new shape estimation benchmark, the "Human Bodies in the Wild" (HBW) dataset. HBW has images of people in natural clothing and natural settings together with ground-truth 3D shape from a body scanner. HBW is more challenging than existing shape benchmarks like SSP-3D, and SHAPY significantly outperforms existing methods on this benchmark. We believe this work will open new directions, since the idea of leveraging linguistic annotations to improve 3D shape has many applications.
294
+
295
+ Limitations: Our model-agency training dataset (Sec. 3.2) is not representative of the entire human population and this limits SHAPY's ability to predict larger body shapes. To address this, we need to find images of more diverse bodies together with anthropometric measurements and linguistic shape attributes describing them.
296
+
297
+ Social impact: Knowing the 3D shape of a person has advantages, for example, in the clothing industry to avoid unnecessary returns. If used without consent, 3D shape estimation may invade individuals' privacy. As with all other 3D pose and shape estimation methods, surveillance and deep-fake creation are further important risks. Consequently, SHAPY's license prohibits such uses.
298
+
299
+ Acknowledgments: This work was supported by the Max Planck ETH Center for Learning Systems and the International Max Planck Research School for Intelligent Systems. We thank Tsvetelina Alexiadis, Galina Henz, Claudia Gallatz, and Taylor McConnell for the data collection, and Markus Höschle for the camera setup. We thank Muhammed Kocabas, Nikos Athanasiou and Maria Alejandra Quiros-Ramirez for the insightful discussions.
300
+
301
+ Disclosure: https://files.is.tue.mpg.de/black/CoI_CVPR_2022.txt
302
+
303
+ # References
304
+
305
+ [1] Ankur Agarwal and Bill Triggs. Recovering 3D human pose from monocular images. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 28(1):44-58, 2006. 3
306
+ [2] Brett Allen, Brian Curless, and Zoran Popovic. The space of human body shapes: Reconstruction and parameterization from range scans. Transactions on Graphics (TOG), 22(3):587-594, 2003. 3
307
+ [3] Dragomir Anguelov, Praveen Srinivasan, Daphne Koller, Sebastian Thrun, Jim Rodgers, and James Davis. SCAPE: Shape completion and animation of people. Transactions on Graphics (TOG), 24(3):408-416, 2005. 3
308
+ [4] Alexandru Balan and Michael J. Black. The naked truth: Estimating body shape under clothing. In European Conference on Computer Vision (ECCV), volume 5304, pages 15-29, 2008. 3
309
+ [5] Alexandru O. Balan, Leonid Sigal, Michael J. Black, James E. Davis, and Horst W. Haussecker. Detailed human shape and pose from images. In Computer Vision and Pattern Recognition (CVPR), pages 1-8, 2007. 3
310
+ [6] Didier Bieler, Semih Gunel, Pascal Fua, and Helge Rhodin. Gravity as a reference for estimating a person's height from video. In International Conference on Computer Vision (ICCV), pages 8568-8576, 2019. 4
311
+ [7] Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J. Black. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In European Conference on Computer Vision (ECCV), volume 9909, pages 561-578, 2016. 1, 3
312
+ [8] Zhe Cao, Gines Hidalgo Martinez, Tomas Simon, Shih-En Wei, and Yaser Sheikh. OpenPose: Realtime multi-person 2D pose estimation using part affinity fields. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 43(1):172–186, 2019. 3
313
+ [9] Vasileios Choutas, Georgios Pavlakos, Timo Bolkart, Dimitrios Tzionas, and Michael J. Black. Monocular expressive body regression through body-driven attention. In European Conference on Computer Vision (ECCV), volume 12355, pages 20-40, 2020. 3, 6, 7, 8
314
+ [10] Antonio Criminisi, Ian Reid, and Andrew Zisserman. Single view metrology. International Journal of Computer Vision (IJCV), 40(2):123-148, 2000. 4
315
+ [11] Ratan Dey, Madhurya Nangia, Keith W. Ross, and Yong Liu. Estimating heights from photo collections: A data-driven approach. In Conference on Online Social Networks (COSN), page 227-238, 2014. 4
316
+ [12] Sai Kumar Dwivedi, Nikos Athanasiou, Muhammed Kocabas, and Michael J. Black. Learning to regress bodies from images using differentiable semantic rendering. In International Conference on Computer Vision (ICCV), pages 11250-11259, 2021. 3
317
+ [13] Yao Feng, Vasileios Choutas, Timo Bolkart, Dimitrios Tzionas, and Michael J. Black. Collaborative regression of expressive bodies using moderation. In International Conference on 3D Vision (3DV), pages 792-804, 2021. 6
318
+ [14] Georgios Georgakis, Ren Li, Srikrishna Karanam, Terrence Chen, Jana Košecka, and Ziyan Wu. Hierarchical kinematic
319
+
320
+ human mesh recovery. In European Conference on Computer Vision (ECCV), volume 12362, pages 768-784, 2020. 3
321
+ [15] Ke Gong, Yiming Gao, Xiaodan Liang, Xiaohui Shen, Meng Wang, and Liang Lin. Graphonomy: Universal human parsing via graph transfer learning. In Computer Vision and Pattern Recognition (CVPR), pages 7450-7459, 2019. 3
322
+ [16] Peng Guan, Alexander Weiss, Alexandru Balan, and Michael J. Black. Estimating human shape and pose from a single image. In International Conference on Computer Vision (ICCV), pages 1381-1388, 2009. 3
323
+ [17] Semih Gunel, Helge Rhodin, and Pascal Fua. What face and body shapes can tell us about height. In International Conference on Computer Vision Workshops (ICCVw), pages 1819-1827, 2019. 4
324
+ [18] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. Mask R-CNN. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 42(2):386-397, 2020. 3
325
+ [19] Matthew Hill, Stephan Streuber, Carina Hahn, Michael Black, and Alice O'Toole. Exploring the relationship between body shapes and descriptions by linking similarity spaces. Journal of Vision (JOV), 15(12):931-931, 2015. 4
326
+ [20] David T. Hoffmann, Dimitrios Tzionas, Michael J. Black, and Siyu Tang. Learning to train with synthetic humans. In German Conference on Pattern Recognition (GCPR), pages 609-623, 2019. 3
327
+ [21] Wei-Lin Hsiao and Kristen Grauman. ViBE: Dressing for diverse body shapes. In Computer Vision and Pattern Recognition (CVPR), pages 11056-11066, 2020. 3
328
+ [22] Yinghao Huang, Federica Bogo, Christoph Lassner, Angjoo Kanazawa, Peter V. Gehler, Javier Romero, Ijaz Akhter, and Michael J. Black. Towards accurate marker-less human shape and pose estimation over time. In International Conference on 3D Vision (3DV), pages 421-430, 2017. 3
329
+ [23] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 36(7):1325-1339, 2013. 3, 6
330
+ [24] Yasamin Jafarian and Hyun Soo Park. Learning high fidelity depths of dressed humans by watching social media dance videos. In Computer Vision and Pattern Recognition (CVPR), pages 12753-12762, 2021. 3
331
+ [25] Wen Jiang, Nikos Kolotouros, Georgios Pavlakos, Xiaowei Zhou, and Kostas Daniilidis. Coherent reconstruction of multiple humans from a single image. In Computer Vision and Pattern Recognition (CVPR), pages 5578-5587, 2020. 3
332
+ [26] Hanbyul Joo, Tomas Simon, and Yaser Sheikh. Total capture: A 3D deformation model for tracking faces, hands, and bodies. In Computer Vision and Pattern Recognition (CVPR), pages 8320-8329, 2018. 1, 3
333
+ [27] Angjoo Kanazawa, Michael J. Black, David W. Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In Computer Vision and Pattern Recognition (CVPR), pages 7122-7131, 2018. 3, 8
334
+
335
+ [28] Angjoo Kanazawa, Jason Y. Zhang, Panna Felsen, and Jitendra Malik. Learning 3D human dynamics from video. In Computer Vision and Pattern Recognition (CVPR), pages 5614-5623, 2019. 1
336
+ [29] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), pages 1725–1732, 2014. 6
337
+ [30] Muhammed Kocabas, Nikos Athanasiou, and Michael J. Black. VIBE: Video inference for human body pose and shape estimation. In Computer Vision and Pattern Recognition (CVPR), pages 5252-5262, 2020. 1, 3
338
+ [31] Muhammed Kocabas, Chun-Hao P. Huang, Otmar Hilliges, and Michael J. Black. PARE: Part attention regressor for 3D human body estimation. In International Conference on Computer Vision (ICCV), pages 11127-11137, 2021. 1
339
+ [32] Muhammed Kocabas, Chun-Hao P. Huang, Joachim Tesch, Lea Müller, Otmar Hilliges, and Michael J. Black. SPEC: Seeing people in the wild with an estimated camera. In International Conference on Computer Vision (ICCV), pages 11035-11045, 2021. 1
340
+ [33] Nikos Kolotouros, Georgios Pavlakos, Michael J. Black, and Kostas Daniilidis. Learning to reconstruct 3D human pose and shape via model-fitting in the loop. In International Conference on Computer Vision (ICCV), pages 2252-2261, 2019. 1, 3, 6, 7, 8
341
+ [34] Christoph Lassner, Javier Romero, Martin Kiefel, Federica Bogo, Michael J. Black, and Peter V. Gehler. Unite the people: Closing the loop between 3D and 2D human representations. In Computer Vision and Pattern Recognition (CVPR), pages 6050-6059, 2017. 3
342
+ [35] Junbang Liang and Ming C. Lin. Shape-aware human pose and shape reconstruction using multi-view images. In International Conference on Computer Vision (ICCV), pages 4351-4361, 2019. 2, 3
343
+ [36] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision (ECCV), volume 8693, pages 740-755, 2014. 3
344
+ [37] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multi-person linear model. Transactions on Graphics (TOG), 34(6):248:1-248:16, 2015. 2, 3
345
+ [38] Meysam Madadi, Hugo Bertiche, and Sergio Escalera. SMPLR: Deep learning based SMPL reverse for 3D human pose and shape recovery. Pattern Recognition (PR), 106:107472, 2020. 7
346
+ [39] Naureen Mahmood, Nima Ghorbani, Nikolaus F. Troje, Gerard Pons-Moll, and Michael J. Black. AMASS: Archive of motion capture as surface shapes. In International Conference on Computer Vision (ICCV), pages 5442-5451, 2019. 3
347
+ [40] Lea Müller, Ahmed A. A. Osman, Siyu Tang, Chun-Hao P. Huang, and Michael J. Black. On self-contact and human pose. In Computer Vision and Pattern Recognition (CVPR), pages 9990-9999, 2021. 3, 7, 8
348
+
349
+ [41] Mohamed Omran, Christoph Lassner, Gerard Pons-Moll, Peter V. Gehler, and Bernt Schiele. Neural body fitting: Unifying deep learning and model based human pose and shape estimation. In International Conference on 3D Vision (3DV), pages 484-494, 2018. 3
350
+ [42] Priyanka Patel, Chun-Hao Paul Huang, Joachim Tesch, David Hoffmann, Shashank Tripathi, and Michael J. Black. AGORA: Avatars in geography optimized for regression analysis. In Computer Vision and Pattern Recognition (CVPR), pages 13468-13478, 2021. 3
351
+ [43] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed A. A. Osman, Dimitrios Tzionas, and Michael J. Black. Expressive body capture: 3D hands, face, and body from a single image. In Computer Vision and Pattern Recognition (CVPR), pages 10975-10985, 2019. 1, 3, 4
352
+ [44] Georgios Pavlakos, Luyang Zhu, Xiaowei Zhou, and Kostas Daniilidis. Learning to estimate 3D human pose and shape from a single color image. In Computer Vision and Pattern Recognition (CVPR), pages 459–468, 2018. 3
353
+ [45] Gerard Pons-Moll, Javier Romero, Naureen Mahmood, and Michael J. Black. Dyna: A model of dynamic human shape in motion. Transactions on Graphics (TOG), 34(4):120:1-120:14, 2015. 6
354
+ [46] Sergi Pujades, Betty Mohler, Anne Thaler, Joachim Tesch, Naureen Mahmood, Nikolas Hesse, Heinrich H Bülthoff, and Michael J. Black. The virtual caliper: Rapid creation of metrically accurate avatars from 3D measurements. Transactions on Visualization and Computer Graphics (TVCG), 25(5):1887-1897, 2019. 3, 4, 5
355
+ [47] Kathleen M. Robinette, Sherri Blackwell, Hein Daanen, Mark Boehmer, Scott Fleming, Tina Brill, David Hoeferlin, and Dennis Burnsides. Civilian American and European Surface Anthropometry Resource (CAESAR) final report. Technical Report AFRL-HE-WP-TR-2002-0169, US Air Force Research Laboratory, 2002. 2, 4, 5
356
+ [48] Nadine Rueegg, Christoph Lassner, Michael J. Black, and Konrad Schindler. Chained representation cycling: Learning to estimate 3D human pose and shape by cycling between representations. In Conference on Artificial Intelligence (AAAI), pages 5561-5569, 2020. 3
357
+ [49] Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Hao Li, and Angjoo Kanazawa. PIFu: Pixel-aligned implicit function for high-resolution clothed human digitization. In International Conference on Computer Vision (ICCV), pages 2304-2314, 2019. 3
358
+ [50] Shunsuke Saito, Tomas Simon, Jason Saragih, and Hanbyul Joo. PIFuHD: Multi-level pixel-aligned implicit function for high-resolution 3D human digitization. In Computer Vision and Pattern Recognition (CVPR), pages 81-90, 2020. 3
359
+ [51] Akash Sengupta, Ignas Budvytis, and Roberto Cipolla. Synthetic training for accurate 3D human pose and shape estimation in the wild. In British Machine Vision Conference (BMVC), 2020. 2, 3, 6, 7, 8
360
+ [52] Akash Sengupta, Ignas Budvytis, and Roberto Cipolla. Hierarchical kinematic probability distributions for 3D human
361
+
362
+ shape and pose estimation from images in the wild. In International Conference on Computer Vision (ICCV), pages 11219-11229, 2021. 1, 2, 3, 7, 8
363
+ [53] Akash Sengupta, Ignas Budvytis, and Roberto Cipolla. Probabilistic 3D human shape and pose estimation from multiple unconstrained images in the wild. In Computer Vision and Pattern Recognition (CVPR), pages 16094–16104, 2021. 3
364
+ [54] Hyewon Seo, Frederic Cordier, and Nadia Magnenat-Thalmann. Synthesizing animatable body models with parameterized shape modifications. In Symposium on Computer Animation (SCA), pages 120-125, 2003. 3
365
+ [55] Hyewon Seo and Nadia Magnenat-Thalmann. An automatic modeling of human bodies from sizing parameters. In Symposium on Interactive 3D Graphics (SI3D), pages 19-26, 2003. 3
366
+ [56] Leonid Sigal, Alexandru Balan, and Michael J Black. HumanEva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion. International Journal of Computer Vision (IJCV), 87(1):4-27, 2010. 3
367
+ [57] Stephan Streuber, M. Alejandra Quiros-Ramirez, Matthew Q. Hill, Carina A. Hahn, Silvia Zuffi, Alice O'Toole, and Michael J. Black. Body Talk: Crowdshaping realistic 3D avatars with words. Transactions on Graphics (TOG), 35(4):54:1-54:14, 2016. 2, 4, 5
368
+ [58] Aggeliki Tsoli, Matthew Loper, and Michael J. Black. Model-based anthropometry: Predicting measurements from 3D human scans in multiple poses. In Winter Conference on Applications of Computer Vision (WACV), pages 83-90, 2014. 3
369
+ [59] Gul Varol, Duygu Ceylan, Bryan Russell, Jimei Yang, Ersin Yumer, Ivan Laptev, and Cordelia Schmid. BodyNet: Volumetric inference of 3D human body shapes. In European Conference on Computer Vision (ECCV), volume 11211, pages 20-38, 2018. 3
370
+ [60] Gül Varol, Javier Romero, Xavier Martin, Naureen Mahmood, Michael J. Black, Ivan Laptev, and Cordelia Schmid. Learning from synthetic humans. In Computer Vision and Pattern Recognition (CVPR), pages 4627-4635, 2017. 3
371
+
372
+ [61] Timo von Marcard, Roberto Henschel, Michael Black, Bodo Rosenhahn, and Gerard Pons-Moll. Recovering accurate 3D human pose in the wild using IMUs and a moving camera. In European Conference on Computer Vision (ECCV), volume 11214, pages 614-631, 2018. 3, 6
373
+ [62] Andrew Weitz, Lina Colucci, Sidney Primas, and Brinnae Bent. InfiniteForm: A synthetic, minimal bias dataset for fitness applications. arXiv:2110.01330, 2021. 3
374
+ [63] Stefanie Wuhrer and Chang Shu. Estimating 3D human shapes from measurements. Machine Vision and Applications (MVA), 24(6):1133-1147, 2013. 5
375
+ [64] Yuliang Xiu, Jinlong Yang, Dimitrios Tzionas, and Michael J. Black. ICON: Implicit Clothed humans Obtained from Normals. In Computer Vision and Pattern Recognition (CVPR), 2022. 3
376
+ [65] Hongyi Xu, Eduard Gabriel Bazavan, Andrei Zanfir, William T. Freeman, Rahul Sukthankar, and Cristian Sminchisescu. GHUM & GHUML: Generative 3D human shape and articulated pose models. In Computer Vision and Pattern Recognition (CVPR), pages 6183-6192, 2020. 1, 3
377
+ [66] Andrei Zanfir, Eduard Gabriel Bazavan, Hongyi Xu, William T Freeman, Rahul Sukthankar, and Cristian Sminchisescu. Weakly supervised 3D human pose and shape reconstruction with normalizing flows. In European Conference on Computer Vision (ECCV), volume 12351, pages 465-481, 2020. 3
378
+ [67] Hongwen Zhang, Yating Tian, Xinchi Zhou, Wanli Ouyang, Yebin Liu, Limin Wang, and Zhenan Sun. PyMAF: 3D human pose and shape regression with pyramidal mesh alignment feedback loop. In International Conference on Computer Vision (ICCV), pages 11446-11456, 2021. 1
379
+ [68] Rui Zhu, Xingyi Yang, Yannick Hold-Geoffroy, Federico Perazzi, Jonathan Eisenmann, Kalyan Sunkavalli, and Manmohan Chandraker. Single view metrology in the wild. In European Conference on Computer Vision (ECCV), volume 12356, pages 316–333, 2020. 4
accurate3dbodyshaperegressionusingmetricandsemanticattributes/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6b0b011e4446124556f49664eee4c1a36796d59de91f3dfb176a4a1b0ebb5880
3
+ size 501872
accurate3dbodyshaperegressionusingmetricandsemanticattributes/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6f356971b0d53e052a46bd0fc32556c068dc36fadb48334b9e1130082dc7b506
3
+ size 420230
acplanticurriculumpseudolabellingforsemisupervisedmedicalimageclassification/d6176429-3843-42b4-b235-8f7c679bc7da_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3ad85c081c6873a08ba6a29fe13bcca2eaca617e8446a48e1d4e6d07733b7049
3
+ size 78656
acplanticurriculumpseudolabellingforsemisupervisedmedicalimageclassification/d6176429-3843-42b4-b235-8f7c679bc7da_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7d76865fb4cae3f9629ccd483da4a2cf39d300d85c8e8ff7aa76c7b5f619aa45
3
+ size 94529
acplanticurriculumpseudolabellingforsemisupervisedmedicalimageclassification/d6176429-3843-42b4-b235-8f7c679bc7da_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b144a7e91825068138606a15224c80f5e87f25eecc26c407f700f5473692d70f
3
+ size 2244408
acplanticurriculumpseudolabellingforsemisupervisedmedicalimageclassification/full.md ADDED
@@ -0,0 +1,275 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # ACPL: Anti-curriculum Pseudo-labelling for Semi-supervised Medical Image Classification
2
+
3
+ Fengbei Liu $^{1*}$ Yu Tian $^{1*}$ Yuanhong Chen $^{1}$ Yuyuan Liu $^{1}$ Vasileios Belagiannis $^{2}$ Gustavo Carneiro $^{1}$
4
+
5
+ $^{1}$ Australian Institute for Machine Learning, University of Adelaide
6
+ $^{2}$ Universität Ulm, Germany
7
+
8
+ # Abstract
9
+
10
+ Effective semi-supervised learning (SSL) in medical image analysis (MIA) must address two challenges: 1) work effectively on both multi-class (e.g., lesion classification) and multi-label (e.g., multiple-disease diagnosis) problems, and 2) handle imbalanced learning (because of the high variance in disease prevalence). One strategy to explore in SSL MIA is pseudo-labelling, but it has a few shortcomings. Pseudo-labelling has in general lower accuracy than consistency learning, it is not specifically designed for both multi-class and multi-label problems, and it can be challenged by imbalanced learning. In this paper, unlike traditional methods that select confident pseudo labels by thresholding, we propose a new SSL algorithm, called anti-curriculum pseudo-labelling (ACPL), which introduces novel techniques to select informative unlabelled samples, improving training balance and allowing the model to work for both multi-label and multi-class problems, and to estimate pseudo labels by an accurate ensemble of classifiers (improving pseudo-label accuracy). We run extensive experiments to evaluate ACPL on two public medical image classification benchmarks: Chest X-Ray14 for thorax disease multi-label classification and ISIC2018 for skin lesion multi-class classification. Our method outperforms previous SOTA SSL methods on both datasets $^{12}$ .
11
+
12
+ # 1. Introduction
13
+
14
+ Deep learning has shown outstanding results in medical image analysis (MIA) [24, 34, 35]. Compared to computer vision, the labelling of MIA training sets by medical experts is significantly more expensive, resulting in low availability of labelled images, but the high availability of unlabelled
15
+
16
+ ![](images/9013c576f3285bc6e1b38f62b263dd6633947e54fd2561a59e126e15f68726d7.jpg)
17
+ (a) Diagram of our ACPL (top) and traditional pseudo-label SSL (bottom)
18
+
19
+ ![](images/0767c8e2fc388b322be54381b1ab0bb3fc19e3f3f6f71c1a2990be6d2070d94d.jpg)
20
+ (b) Imbalanced distribution on multi-label Chest X-ray14 [39] (left) and multi-class ISIC2018 [36] (right)
21
+
22
+ ![](images/7eff3aaa09a7728090044eb4940d2cf57946d72b6663d69774a8885cca74639f.jpg)
23
+ Figure 1. In (a), we show diagrams of the proposed ACPL (top) and the traditional pseudo-label SSL (bottom) methods, and (b) displays histograms of images per label for the multi-label Chest X-ray14 [39] (left) and multi-class ISIC2018 [36] (right).
24
+
25
+ images from clinic and hospital databases can be explored in the modelling of deep learning classifiers. Furthermore, unlike computer vision problems, which tend to be mostly multi-class and balanced, MIA has a number of multi-class (e.g., a lesion image of a single class) and multi-label (e.g., an image from a patient can contain multiple diseases) problems, where both usually contain severe class imbalances because of the variable prevalence of diseases (see Fig. 1-(b)). Hence, MIA semi-supervised learning (SSL) methods need to be flexible enough to work with multi-label and multi-class problems, in addition to handling imbalanced learning.
26
+
27
+ State-of-the-art (SOTA) SSL approaches are usually based on the consistency learning of unlabelled data [5, 6, 32] and self-supervised pre-training [25]. Even though consistency-based methods show SOTA results on multi-class SSL problems, pseudo-labelling methods have shown
28
+
29
+ better results for multi-label SSL problems [29]. Pseudo-labelling methods provide labels to confidently classified unlabelled samples that are used to re-train the model [22]. One issue with pseudo-labelling SSL methods is that the confidently classified unlabelled samples represent the least informative ones [30] that, for imbalanced problems, are likely to belong to the majority classes. Hence, this will bias the classification toward the majority classes and most likely deteriorate the classification accuracy of the minority classes. Also, selecting confident pseudo-labelled samples is challenging in multi-class, but even more so in multi-label problems. Previous papers [2, 29] use a fixed threshold for all classes, but a class-wise threshold that addresses imbalanced learning and correlations between classes in multi-label problems would enable more accurate pseudo-label predictions. However, such a class-wise threshold is hard to estimate without knowing the class distributions or if we are dealing with a multi-class or multi-label problem. Furthermore, using the model output for the pseudo-labelling process can also cause confirmation bias [1], whereby the assignment of incorrect pseudo-labels will increase the model confidence in those incorrect predictions, and consequently decrease the model accuracy.
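For reference, the conventional confidence-threshold selection criticised here can be sketched as follows (the threshold value is illustrative); it is exactly this step that tends to pick easy, majority-class samples on imbalanced data.

```python
import numpy as np

def select_confident_pseudo_labels(probs, threshold=0.95):
    """Traditional pseudo-label selection for a multi-class problem:
    keep unlabelled samples whose maximum predicted probability exceeds a
    fixed threshold and assign them the arg-max class as pseudo label."""
    confidence = probs.max(axis=1)
    pseudo_labels = probs.argmax(axis=1)
    keep = confidence >= threshold
    return np.flatnonzero(keep), pseudo_labels[keep]

# Example: probs is a (num_unlabelled, num_classes) array of model predictions.
probs = np.array([[0.97, 0.02, 0.01],
                  [0.40, 0.35, 0.25]])
idx, labels = select_confident_pseudo_labels(probs)   # only the first sample is kept
```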
30
+
31
+ In this paper, we propose the anti-curriculum pseudo-labelling (ACPL) method, which addresses multi-class and multi-label imbalanced learning SSL MIA problems. First, we introduce a new approach to select the most informative unlabelled images to be pseudo-labelled. This is motivated by our argument that there exists a distribution shift between unlabelled and labelled samples for SSL. An effective learning curriculum must focus on informative unlabelled samples that are located as far as possible from the distribution of labelled samples. As a result, these informative samples are likely to belong to the minority classes in MIA imbalanced learning problems. Selecting these informative samples will naturally balance the training process and, given that they are selected before the pseudo-labelling process, we eliminate the need for estimating a class-wise classification threshold, allowing our model to work well on multi-class and multi-label problems. The information content measure of an unlabelled sample is computed with our proposed cross-distribution sample informativeness, which outputs how close an unlabelled sample is to the set of labelled anchor samples (anchor samples are highly informative labelled samples). Second, we introduce a new pseudo-labelling mechanism, called informative mixup, which combines the model classification with a K-nearest neighbor (KNN) classification guided by sample informativeness to improve prediction accuracy and mitigate confirmation bias. Third, we propose the anchor set purification method that selects the most informative pseudo-labelled samples to be included in the labelled anchor set to improve the pseudo-labelling accuracy of the KNN classifier in later training stages.
34
+
35
+ To summarise, our ACPL approach selects highly informative samples for pseudo-labelling (addressing MIA imbalanced classification problems and allowing multi-label multi-class modelling) and uses an ensemble of classifiers to produce accurate pseudo labels (tackling confirmation bias to improve classification accuracy), where the main technical contributions are:
36
+
37
+ - A novel information content measure, named cross-distribution sample informativeness, to select informative unlabelled samples;
38
+ - A new pseudo-labelling mechanism, called informative mixup, which generates pseudo labels from an ensemble of deep learning and KNN classifiers; and
39
+ - A novel method, called anchor set purification (ASP), to select informative pseudo-labelled samples to be included in the labelled anchor set to improve the pseudo-labelling accuracy of the KNN classifier.
40
+
41
+ We evaluate ACPL on two publicly available medical image classification datasets, namely the Chest X-Ray14 for thorax disease multi-label classification [39] and the ISIC2018 for skin lesion multi-class classification [8, 36]. Our method outperforms the current SOTA methods in both datasets.
42
+
43
+ # 2. Related Work
44
+
45
+ We first review consistency-based and pseudo-labelling SSL methods. Then, we discuss the curriculum and anticurriculum learning literature for fully and semi-supervised learning and present relevant SSL MIA methods.
46
+
47
+ Consistency-based SSL optimises the classification prediction of labelled images and minimises the prediction outputs of different views of unlabelled images, where these views are obtained from different types of image perturbations, such as spatial/temporal [21, 33], adversarial [27], or data augmentation [5, 6, 32]. The performance of the consistency-based methods can be further improved with self-supervised pre-training [25]. Even though consistency-based SSL methods show SOTA results in many benchmarks [32], they depend on a careful design of perturbation functions that requires domain knowledge and would need to be adapted to each new type of medical imaging. Furthermore, Rizve et al. [29] show that pseudo-labelling SSL methods are more accurate for multi-label problems.
48
+
49
+ Pseudo-labelling SSL methods [7, 29, 31, 41] train a model with the available labelled data, estimate the pseudo labels of unlabelled samples classified with high confidence [22], and then take these pseudo-labelled samples to retrain the model. As mentioned in Sec. 1, pseudo-label SSL approaches can bias classification toward the majority classes in imbalanced problems, are not seamlessly adaptable to multi-class and multi-label problems, and can also lead to confirmation bias. We argue that the improvement of pseudo-labelling SSL methods depends on the selection of informative unlabelled samples, to address the majority class bias and to adapt to multi-class and multi-label problems, and on an accurate pseudo-labelling mechanism, to handle confirmation bias; these are the two points that we target in this paper.
50
+
51
+
52
+
53
+ The selection of training samples based on their information content has been studied by fully supervised curriculum and anti-curriculum learning methods [40]. Curriculum learning focuses on the easy samples in the early training stages and gradually includes the hard samples in the later training stages, where easy samples [4, 15, 20] are usually defined as samples that have small losses during training, and hard samples tend to have large losses. On the other hand, anti-curriculum focuses on the hard samples first and transitions to the easy samples later in the training [14, 17]. The methods above have been designed to work in fully supervised learning. Cascante et al. [7] explored a pseudo-labelling SSL method based on curriculum learning, but we are not aware of SSL methods that explore anti-curriculum learning. Since we target accurate SSL of imbalanced multi-class and multi-label problems, we follow anti-curriculum learning, which pseudo-labels the most informative samples that are likely to belong to the minority classes (consequently, helping to balance the training) and enables the selection of samples without requiring the estimation of a class-wise classification threshold (enabling a seamless adaptation to multi-class and multi-label problems).
54
+
55
+ The main benchmarks for SSL in MIA study the multi-label classification of chest X-ray (CXR) images [13, 39] and multi-class classification of skin lesions [8, 36]. For CXR SSL classification, pseudo-labelling methods have been explored [2], but SOTA results are achieved with consistency learning approaches [9, 23, 25, 26, 37]. For skin lesion SSL classification, the current SOTA is also based on consistency learning [26], with pseudo-labelling approaches [3] not being competitive. We show that our proposed pseudo-labelling method ACPL can surpass the consistency-based SOTA on both benchmarks, demonstrating the value of selecting highly informative samples for pseudo labelling and of the accurate pseudo labels from the ensemble of classifiers. We also show that our ACPL improves the current computer vision SOTA [29] applied to MIA, demonstrating the limitation of computer vision methods in MIA and also the potential of our approach to be applied in more general SSL problems.
56
+
57
+ # 3. Methods
58
+
59
+ To introduce our SSL method ACPL, assume that we have a small labelled training set $\mathcal{D}_L = \{(\mathbf{x}_i,\mathbf{y}_i)\}_{i = 1}^{|\mathcal{D}_L|}$ where $\mathbf{x}_i\in \mathcal{X}\subset \mathbb{R}^{H\times W\times C}$ is the input image of size
60
+
61
+ Algorithm 1 Anti-curriculum Pseudo-labelling Algorithm
62
+ 1: require: Labelled set $\mathcal{D}_L$ , unlabelled set $\mathcal{D}_U$ , and number of training stages $T$
63
+ 2: initialise $\mathcal{D}_A = \mathcal{D}_L$ , and $t = 0$
64
+ 3: warm-up train $p_{\theta_t}(\mathbf{x})$ with $\theta_t = \arg \min_\theta \frac{1}{|\mathcal{D}_L|} \sum_{(\mathbf{x}_i, \mathbf{y}_i) \in \mathcal{D}_L} \ell(\mathbf{y}_i, p_{\theta}(\mathbf{x}_i))$
65
+ 4: while $t < T$ or $|\mathcal{D}_U| \neq 0$ do
66
+ 5: build pseudo-labelled dataset using CDSI from (2) and IM from (6): $\mathcal{D}_S = \{(x, \tilde{y}) | x \in \mathcal{D}_U, h(f_{\theta_t}(x), \mathcal{D}_A) = 1, \tilde{y} = g(f_{\theta_t}(x), \mathcal{D}_A)\}$
67
+ 6: update anchor set with ASP from (7): $\mathcal{D}_A = \mathcal{D}_A \bigcup (\mathbf{x}, \tilde{\mathbf{y}})$ , where $(x, \tilde{y}) \in \mathcal{D}_S$ , and $a(f_{\theta_t}(x), \mathcal{D}_U, \mathcal{D}_A) = 1$
68
+ 7: $t \gets t + 1$
69
+ 8: optimise (1) using $\mathcal{D}_L, \mathcal{D}_S$ to obtain $p_{\theta_t}(\mathbf{x})$
70
+ 9: update labelled and unlabelled sets: $\mathcal{D}_L \gets \mathcal{D}_L \bigcup \mathcal{D}_S, \mathcal{D}_U \gets \mathcal{D}_U \setminus \mathcal{D}_S$
71
+ 10: end while
72
+ 11: return $p_{\theta_t}(\mathbf{x})$
73
+
74
+ $H\times W$ with $C$ colour channels, and $\mathbf{y}_i\in \{0,1\}^{|\mathcal{V}|}$ is the label with the set of classes denoted by $\mathcal{V} = \{1,\dots,|\mathcal{V}|\}$ (note that $\mathbf{y}_i$ is a one-hot vector for multi-class problems and a binary vector in multi-label problems). A large unlabelled training set $\mathcal{D}_U = \{\mathbf{x}_i\}_{i = 1}^{|\mathcal{D}_U|}$ is also provided, with $|\mathcal{D}_L| \ll |\mathcal{D}_U|$ . We assume the samples from both datasets are drawn from the same (latent) distribution. Our algorithm also relies on the pseudo-labelled set $\mathcal{D}_S$ , which is composed of informative unlabelled samples that have been pseudo-labelled, and an anchor set $\mathcal{D}_A$ that contains informative pseudo-labelled samples. The goal of ACPL is to learn a model $p_{\theta}:\mathcal{X}\to [0,1]^{|\mathcal{V}|}$ parameterised by $\theta$ using the labelled, unlabelled, pseudo-labelled, and anchor datasets.
75
+
76
+ Below, in Sec. 3.1, we introduce our ACPL optimisation that produces accurate pseudo labels for unlabelled samples following an anti-curriculum strategy, where highly informative unlabelled samples are selected to be pseudo-labelled at each training stage. In Sec. 3.2, we present the information criterion of an unlabelled sample, referred to as cross-distribution sample informativeness (CDSI), based on the dissimilarity between the unlabelled sample and samples in the anchor set $\mathcal{D}_A$ . The pseudo labels for the informative unlabelled samples are generated using the proposed informative mixup (IM) method (Sec. 3.3) that mixes up the results from the model $p_{\theta}(.)$ and a K-nearest neighbor (KNN) classifier using the anchor set. At the end of each training stage, the anchor set is updated with the anchor set purification (ASP) method (Sec. 3.4) that only keeps
77
+
78
+ ![](images/71982ed51349b567b75d53935f6e6090c6b04a98e830f3fd9f18d6d7ac3b0abc.jpg)
79
+ Figure 2. Anti-curriculum pseudo-labelling (ACPL) algorithm. The algorithm is divided into the following iterative steps: 1) train the model with $\mathcal{D}_S$ and $\mathcal{D}_L$ ; 2) extract the features from the anchor and unlabelled samples; 3) estimate information content of unlabelled samples with CDSI from (4) with anchor set $\mathcal{D}_A$ ; 4) partition the unlabelled samples into high, medium and low information content using (2); 5) assign a pseudo label to high information content unlabelled samples with IM from (6); 6) update $\mathcal{D}_S$ with new pseudo-labelled samples; and 7) update $\mathcal{D}_A$ with ASP in (7).
80
+
81
+ the most informative subset of pseudo-labelled samples, according to the CDSI criterion.
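+
+ As a rough illustration of this pipeline, the sketch below mirrors the outer loop of Alg. 1 in Python. The helper callables (`warm_up`, `select_informative`, `pseudo_label`, `asp_filter`, `retrain`) are hypothetical stand-ins for the CDSI, IM and ASP components detailed in Secs. 3.1-3.4; this is a minimal sketch of the control flow under those assumptions, not the authors' implementation.
+
+ ```python
+ from typing import Callable, List, Tuple
+
+ def acpl_loop(
+     D_L: List[Tuple],             # labelled set: (image, label) pairs
+     D_U: List,                    # unlabelled set: images
+     num_stages: int,
+     warm_up: Callable,            # supervised training on D_L only
+     select_informative: Callable, # CDSI gating, Eq. (2): indices of informative samples in D_U
+     pseudo_label: Callable,       # informative mixup, Eq. (6): pseudo label for one sample
+     asp_filter: Callable,         # ASP, Eq. (7): pseudo-labelled samples to add to the anchor set
+     retrain: Callable,            # minimises the ACPL loss of Eq. (1) on D_L and D_S
+ ):
+     D_A = list(D_L)               # anchor set starts as the labelled set
+     model = warm_up(D_L)
+     for _ in range(num_stages):
+         if not D_U:
+             break
+         idx = select_informative(model, D_U, D_A)                        # anti-curriculum selection
+         D_S = [(D_U[i], pseudo_label(model, D_U[i], D_A)) for i in idx]  # pseudo-labelled set
+         D_A = D_A + asp_filter(model, D_S, D_U, D_A)                     # purified anchor update
+         model = retrain(model, D_L, D_S)
+         D_L = D_L + D_S                                                  # move samples to the labelled pool
+         idx_set = set(idx)
+         D_U = [x for i, x in enumerate(D_U) if i not in idx_set]
+     return model
+ ```
+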
82
+
83
+ # 3.1. ACPL Optimisation
84
+
85
+ Our ACPL optimisation, described in Alg. 1 and depicted by Fig. 2, starts with a warm-up supervised training of the parameters of the model $p_{\theta}(.)$ using only the labelled set $\mathcal{D}_L$ . For the rest of the training, we use the sets of labelled and unlabelled samples, $\mathcal{D}_L$ and $\mathcal{D}_U$ , and update the pseudo-labelled set $\mathcal{D}_S$ and the anchor set $\mathcal{D}_A$ containing the informative unlabelled and pseudo-labelled samples, where $\mathcal{D}_S$ starts as an empty set and $\mathcal{D}_A$ starts with the samples in $\mathcal{D}_L$ . The optimisation iteratively minimises the following cost function:
86
+
87
+ $$
88
+ \begin{aligned} \ell_{ACPL}(\theta, \mathcal{D}_{L}, \mathcal{D}_{S}) ={} & \frac{1}{|\mathcal{D}_{L}|} \sum_{(\mathbf{x}_{i}, \mathbf{y}_{i}) \in \mathcal{D}_{L}} \ell\left(\mathbf{y}_{i}, p_{\theta}(\mathbf{x}_{i})\right) \\ & + \frac{1}{|\mathcal{D}_{S}|} \sum_{(\mathbf{x}_{i}, \tilde{\mathbf{y}}_{i}) \in \mathcal{D}_{S}} \ell\left(\tilde{\mathbf{y}}_{i}, p_{\theta}(\mathbf{x}_{i})\right), \end{aligned} \tag{1}
89
+ $$
90
+
91
+ where $\ell(.)$ denotes a classification loss (e.g., cross-entropy), $\theta$ is the model parameter, $\mathbf{y}_i$ is the ground truth, and $\tilde{\mathbf{y}}_i$ is the estimated pseudo label. After optimising (1), the labelled and unlabelled sets are updated as $\mathcal{D}_L = \mathcal{D}_L \cup \mathcal{D}_S$ and $\mathcal{D}_U = \mathcal{D}_U \setminus \mathcal{D}_S$ , and a new iteration of optimisation takes place.
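+
+ For concreteness, a possible PyTorch rendering of the objective in (1) for one training step is sketched below, assuming a multi-label setting with a sigmoid output and binary cross-entropy as the classification loss $\ell$; the function and variable names are illustrative rather than the authors' code.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def acpl_loss(model: torch.nn.Module,
+               x_l: torch.Tensor, y_l: torch.Tensor,    # labelled batch from D_L
+               x_s: torch.Tensor, y_s: torch.Tensor):   # pseudo-labelled batch from D_S
+     """Supervised term plus pseudo-label term of Eq. (1)."""
+     loss_l = F.binary_cross_entropy_with_logits(model(x_l), y_l)
+     loss_s = F.binary_cross_entropy_with_logits(model(x_s), y_s)  # y_s may be soft, from Eq. (6)
+     return loss_l + loss_s
+ ```
+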
92
+
93
+ # 3.2. Cross Distribution Sample Informativeness (CDSI)
94
+
95
+ The function that estimates if an unlabelled sample has high information content is defined by
96
+
97
+ $$
98
+ h\left(f_{\theta}(\mathbf{x}), \mathcal{D}_{A}\right) = \begin{cases} 1, & p_{\gamma}(\zeta = \mathrm{high} \mid \mathbf{x}, \mathcal{D}_{A}) > \tau, \\ 0, & \text{otherwise}, \end{cases} \tag{2}
99
+ $$
100
+
101
+ where $\zeta \in \mathcal{Z} = \{\mathrm{low, medium, high}\}$ represents the information content random variable, $\gamma = \{\mu_{\zeta}, \Sigma_{\zeta}, \pi_{\zeta}\}_{\zeta \in \mathcal{Z}}$ denotes the parameters of the Gaussian Mixture Model (GMM) $p_{\gamma}(.)$ , and $\tau = \max \{p_{\gamma}(\zeta = \mathrm{low} | \mathbf{x}, \mathcal{D}_A), p_{\gamma}(\zeta = \mathrm{medium} | \mathbf{x}, \mathcal{D}_A)\}$ . The function $p_{\gamma}(\zeta | \mathbf{x}, \mathcal{D}_A)$ can be decomposed into $p_{\gamma}(\mathbf{x} | \zeta, \mathcal{D}_A) p_{\gamma}(\zeta | \mathcal{D}_A) / p_{\gamma}(\mathbf{x} | \mathcal{D}_A)$ , where
102
+
103
+ $$
104
+ p_{\gamma}(\mathbf{x} \mid \zeta, \mathcal{D}_{A}) = n\left(d\left(f_{\theta}(\mathbf{x}), \mathcal{D}_{A}\right) \mid \mu_{\zeta}, \Sigma_{\zeta}\right), \tag{3}
105
+ $$
106
+
107
+ with $n(\cdot \mid \mu_{\zeta}, \Sigma_{\zeta})$ denoting a Gaussian function with mean $\mu_{\zeta}$ and covariance $\Sigma_{\zeta}$ , $p_{\gamma}(\zeta | \mathcal{D}_A) = \pi_{\zeta}$ representing the ownership probability of $\zeta$ (i.e., the weight of mixture $\zeta$ ), and $p_{\gamma}(\mathbf{x} | \mathcal{D}_A)$ being a normalisation factor. The probability in (3) is computed with the density of the unlabelled sample $\mathbf{x}$ with respect to the anchor set $\mathcal{D}_A$ , as follows:
108
+
109
+ $$
110
+ d\left(f_{\theta}(\mathbf{x}), \mathcal{D}_{A}\right) = \frac{1}{K} \sum_{\substack{(f_{\theta}(\mathbf{x}_{A}), \mathbf{y}_{A}) \in \\ \mathcal{N}(f_{\theta}(\mathbf{x}), \mathcal{D}_{A})}} \frac{f_{\theta}(\mathbf{x})^{\top} f_{\theta}(\mathbf{x}_{A})}{\| f_{\theta}(\mathbf{x}) \|_{2} \, \| f_{\theta}(\mathbf{x}_{A}) \|_{2}}, \tag{4}
111
+ $$
112
+
113
+ where $\mathcal{N}(f_{\theta}(\mathbf{x}),\mathcal{D}_A)$ represents the set of K-nearest neighbors (KNN) from the anchor set $\mathcal{D}_A$ to the input image feature $f_{\theta}(\mathbf{x})$ , with each element in the set $\mathcal{D}_A$ denoted by $(f_{\theta}(\mathbf{x}_A),\mathbf{y}_A)$ . The $F$ -dimensional input image feature is extracted with $f_{\theta}:\mathcal{X}\to \mathbb{R}^{F}$ from the model $p_{\theta}(.)$ with $p_{\theta}(\mathbf{x}) = \sigma (f_{\theta}(\mathbf{x}))$ , where $\sigma (.)$ is the final activation function to produce an output in $[0,1]^{|\mathcal{V}|}$ . The parameters $\gamma$ in (2) are estimated with the expectation-maximisation (EM) algorithm [10], every time after the anchor set is updated.
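+
+ To make the CDSI computation concrete, the sketch below estimates the density score of (4) as the mean cosine similarity to the K nearest anchor features and then fits a three-component GMM over these scores to implement the low/medium/high gating of (2). It uses NumPy and scikit-learn for brevity; the variable names and the value of K are illustrative assumptions, not the authors' code.
+
+ ```python
+ import numpy as np
+ from sklearn.mixture import GaussianMixture
+
+ def density_scores(feat_u, feat_a, k=50):
+     """feat_u: (N_u, F) unlabelled features; feat_a: (N_a, F) anchor features."""
+     fu = feat_u / np.linalg.norm(feat_u, axis=1, keepdims=True)
+     fa = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
+     sim = fu @ fa.T                          # cosine similarity to every anchor
+     topk = np.sort(sim, axis=1)[:, -k:]      # K most similar anchors per sample
+     return topk.mean(axis=1)                 # d(f_theta(x), D_A) in Eq. (4)
+
+ def select_high_information(feat_u, feat_a, k=50):
+     d = density_scores(feat_u, feat_a, k).reshape(-1, 1)
+     gmm = GaussianMixture(n_components=3, random_state=0).fit(d)
+     high = np.argmin(gmm.means_.ravel())     # lowest mean density = farthest from anchors = "high"
+     resp = gmm.predict_proba(d)              # posterior over {low, medium, high}
+     return np.where(resp.argmax(axis=1) == high)[0]  # Eq. (2): keep samples where "high" dominates
+ ```
+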
114
+
115
+ # 3.3. Informative Mixup (IM)
116
+
117
+ After selecting informative unlabelled samples with (2), we aim to produce reliable pseudo labels for them. We can provide two pseudo labels for each unlabelled sample $\mathbf{x} \in \mathcal{D}_U$ : the model prediction from $p_\theta(\mathbf{x})$ , and the K-nearest neighbor (KNN) prediction using the anchor set, as follows:
118
+
119
+ $$
120
+ \tilde{\mathbf{y}}_{\mathrm{model}}(\mathbf{x}) = p_{\theta}(\mathbf{x}),
121
+ $$
122
+
123
+ $$
124
+ \tilde{\mathbf{y}}_{\mathrm{KNN}}(\mathbf{x}) = \frac{1}{K} \sum_{(f_{\theta}(\mathbf{x}_{A}), \mathbf{y}_{A}) \in \mathcal{N}(f_{\theta}(\mathbf{x}), \mathcal{D}_{A})} \mathbf{y}_{A}, \tag{5}
125
+ $$
126
+
127
+ where $\mathbf{y}_A$ denotes the label of an anchor sample. However, using either of the pseudo labels from (5) alone can be problematic for model training. The pseudo label $\tilde{\mathbf{y}}_{\mathrm{model}}(\mathbf{x})$ can cause confirmation bias, and the reliability of $\tilde{\mathbf{y}}_{\mathrm{KNN}}(\mathbf{x})$ depends on the size and representativeness of the initial labelled set. Inspired by MixUp [42], we propose the informative mixup method that constructs the pseudo-labelling function $g(.)$ in (1) with a linear combination of $\tilde{\mathbf{y}}_{\mathrm{model}}(\mathbf{x})$ and $\tilde{\mathbf{y}}_{\mathrm{KNN}}(\mathbf{x})$ weighted by the density
128
+
129
+ ![](images/ea44149b3894f52ff2c886b5f86406d869be2f9aef8988a7cbe2f9f125300bd6.jpg)
130
+ Figure 3. ASP: 1) find KNN samples from an informative unlabelled sample to the anchor set $\mathcal{D}_A$ ; 2) find KNN samples from each anchor sample of (1) to the unlabelled set $\mathcal{D}_U$ ; and 3) calculate the number of surviving nearest neighbours. Samples with the smallest values of $c(.)$ are selected to be inserted into $\mathcal{D}_A$ .
131
+
132
+ score from (4), as follows:
133
+
134
+ $$
135
+ \begin{aligned} \tilde{\mathbf{y}} = g\left(f_{\theta}(\mathbf{x}), \mathcal{D}_{A}\right) ={} & d\left(f_{\theta}(\mathbf{x}), \mathcal{D}_{A}\right) \times \tilde{\mathbf{y}}_{\mathrm{model}}(\mathbf{x}) \\ & + \left(1 - d\left(f_{\theta}(\mathbf{x}), \mathcal{D}_{A}\right)\right) \times \tilde{\mathbf{y}}_{\mathrm{KNN}}(\mathbf{x}). \end{aligned} \tag{6}
136
+ $$
137
+
138
+ The informative mixup in (6) is different from MixUp [42] because it combines the classification results of the same image from two models instead of the classification from the same model of two images. Furthermore, our informative mixup weights the classifiers with the density score to reflect the trade-off between $\tilde{\mathbf{y}}_{\mathrm{model}}(\mathbf{x})$ and $\tilde{\mathbf{y}}_{\mathrm{KNN}}(\mathbf{x})$ . Since informative samples are selected from a region of the anchor set with low feature density, the KNN prediction $\tilde{\mathbf{y}}_{\mathrm{KNN}}(\mathbf{x})$ is less reliable than $\tilde{\mathbf{y}}_{\mathrm{model}}(\mathbf{x})$ , so by default, we should trust the model classification more. The weighting between the two predictions in (6) reflects this observation, where $\tilde{\mathbf{y}}_{\mathrm{model}}(\mathbf{x})$ will tend to have a larger weight given that $d(f_{\theta}(\mathbf{x}),\mathcal{D}_A)$ is usually larger than 0.5, as displayed in Fig. 2 (see the informativeness score histogram at the bottom-right corner). When the sample is located in a high-density region, we place most of the weight on the model prediction given that, in such a case, the model is highly reliable. On the other hand, when the sample is in a low-density region, we balance the contributions of the model and KNN predictions more evenly, given the low reliability of the model.
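+
+ A minimal sketch of this pseudo-labelling step, combining (4), (5) and (6) for a single unlabelled sample, could look as follows; all names are illustrative and the value of K is an assumption.
+
+ ```python
+ import numpy as np
+
+ def informative_mixup(prob_model, feat_x, feat_a, y_a, k=50):
+     """prob_model: (C,) model prediction p_theta(x); feat_x: (F,) feature of x;
+     feat_a: (N_a, F) anchor features; y_a: (N_a, C) anchor labels."""
+     fx = feat_x / np.linalg.norm(feat_x)
+     fa = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
+     sim = fa @ fx                                  # cosine similarity to every anchor
+     nn = np.argsort(sim)[-k:]                      # K nearest anchors
+     d = sim[nn].mean()                             # density score, Eq. (4)
+     y_knn = y_a[nn].mean(axis=0)                   # KNN vote, Eq. (5)
+     return d * prob_model + (1.0 - d) * y_knn      # informative mixup, Eq. (6)
+ ```
+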
139
+
140
+ # 3.4. Anchor Set Purification (ASP)
141
+
142
+ After estimating the pseudo labels for the informative unlabelled samples, we aim to update the anchor set with informative pseudo-labelled samples to keep the density score from (4) accurate in later training stages. However, adding all pseudo-labelled samples would make the anchor set over-sized and increase hyper-parameter sensitivity. Thus, we propose
143
+
144
+ the Anchor Set Purification (ASP) module to select the least connected pseudo-labelled samples to be inserted in the anchor set, as follows (see Fig. 3):
145
+
146
+ $$
147
+ a\left(f_{\theta}(\mathbf{x}), \mathcal{D}_{U}, \mathcal{D}_{A}\right) = \begin{cases} 1, & c\left(f_{\theta}(\mathbf{x}), \mathcal{D}_{U}, \mathcal{D}_{A}\right) \leq \alpha, \\ 0, & \text{otherwise}, \end{cases} \tag{7}
148
+ $$
149
+
150
+ where the pseudo-labelled samples with $a(f_{\theta}(\mathbf{x}), \mathcal{D}_U, \mathcal{D}_A) = 1$ and $\tilde{\mathbf{y}} = g(f_{\theta}(\mathbf{x}), \mathcal{D}_A)$ from (6) are inserted into the anchor set. The information content $c(f_{\theta}(\mathbf{x}), \mathcal{D}_U, \mathcal{D}_A)$ of a pseudo-labelled sample $f_{\theta}(\mathbf{x})$ in (7) is computed in three steps (see Fig. 3): 1) find the KNN samples $\mathcal{N}(f_{\theta}(\mathbf{x}), \mathcal{D}_A)$ from $f_{\theta}(\mathbf{x})$ to the anchor set $\mathcal{D}_A$ ; 2) for each of the $K$ elements $(\mathbf{x}_A, \mathbf{y}_A) \in \mathcal{N}(f_{\theta}(\mathbf{x}), \mathcal{D}_A)$ , find the KNN set $\mathcal{N}(f_{\theta}(\mathbf{x}_A), \mathcal{D}_U)$ from $f_{\theta}(\mathbf{x}_A)$ to the unlabelled set $\mathcal{D}_U$ ; and 3) $c(f_{\theta}(\mathbf{x}), \mathcal{D}_U, \mathcal{D}_A)$ is calculated to be the number of times that the pseudo-labelled sample $\mathbf{x}$ appears in the KNN sets $\mathcal{N}(f_{\theta}(\mathbf{x}_A), \mathcal{D}_U)$ for the $K$ elements of set $\mathcal{N}(f_{\theta}(\mathbf{x}), \mathcal{D}_A)$ . The threshold $\alpha$ in (7) is computed with $\alpha = \min_{\mathbf{x} \in \mathcal{D}_S} c(f_{\theta}(\mathbf{x}), \mathcal{D}_U, \mathcal{D}_A)$ .
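+
+ The reciprocal-KNN counting behind $c(\cdot)$ and the selection rule of (7) can be sketched as follows, where `idx_s` indexes the pseudo-labelled samples within the unlabelled feature matrix; this is an illustrative rendering under those assumptions, not the authors' implementation.
+
+ ```python
+ import numpy as np
+
+ def asp_select(idx_s, feat_u, feat_a, k=50):
+     """idx_s: indices (into feat_u) of the pseudo-labelled samples in D_S;
+     feat_u: (N_u, F) unlabelled features; feat_a: (N_a, F) anchor features."""
+     fu = feat_u / np.linalg.norm(feat_u, axis=1, keepdims=True)
+     fa = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
+     nn_a = np.argsort(fu @ fa.T, axis=1)[:, -k:]   # step 1: K nearest anchors of each unlabelled sample
+     nn_u = np.argsort(fa @ fu.T, axis=1)[:, -k:]   # step 2: K nearest unlabelled samples of each anchor
+     # step 3: how often does sample i survive in the KNN sets of its own anchors?
+     counts = np.array([sum(i in nn_u[a] for a in nn_a[i]) for i in idx_s])
+     alpha = counts.min()                           # threshold alpha in Eq. (7)
+     return [i for i, c in zip(idx_s, counts) if c <= alpha]
+ ```
+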
151
+
152
+ # 4. Experiments
153
+
154
+ For the experiments below, we use the Chest X-Ray14 [39] and ISIC2018 [8, 36] datasets.
155
+
156
+ Chest X-Ray14 contains 112,120 CXR images from 30,805 different patients. There are 14 labels (each label is a disease) and a No Finding class, where each patient can have multiple labels, forming a multi-label classification problem. To compare with previous papers [2, 26], we adopt the official train/test data split [39]. We report the classification result on the test set (26K samples) using the area under the ROC curve (AUC), and the learning process uses training sets containing different proportions of the labelled data in $\{2\%, 5\%, 10\%, 15\%, 20\%\}$ .
157
+
158
+ ISIC2018 is a skin lesion dataset that contains 10,015 images with seven labels. Each image is associated with one of the labels, forming a multi-class classification problem. We follow the train/test split from [26] for fair comparison, where the training set contains $20\%$ of labelled samples and $80\%$ of unlabelled samples. We report the AUC, Sensitivity, and F1 score results.
159
+
160
+ # 4.1. Implementation Details
161
+
162
+ For both datasets, we use DenseNet-121 [12] as our backbone model. For Chest X-Ray14, the dataset preprocessing consists of resizing the images to $512 \times 512$ for faster processing. For the optimisation, we use Adam optimizer [19], batch size 16 and learning rate 0.05. During training, we use data augmentation based on random crop and resize, and random horizontal flip. We first train 20 epochs on the initial labelled subset to warm-up the model for feature extraction. Then we train for 50 epochs, where in every 10 epochs we update the anchor set with ASP from
163
+
164
+ Table 1. Mean AUC testing set results over the 14 disease classes of Chest X-Ray14 for different labelled set training percentages. * indicates the methods that use DenseNet-169 as backbone architecture. Bold number means the best result per label percentage and underline shows previous best results.
165
+
166
+ <table><tr><td>Method Type</td><td>Method</td><td>2%</td><td>5%</td><td>10%</td><td>15%</td><td>20%</td></tr><tr><td rowspan="3">Consistency based</td><td>SRC-MT* [26]</td><td>66.95</td><td>72.29</td><td>75.28</td><td>77.76</td><td>79.23</td></tr><tr><td>NoTeacher [37]</td><td>72.60</td><td>77.04</td><td>77.61</td><td>N/A</td><td>79.49</td></tr><tr><td>S2MTS2 [25]</td><td>74.69</td><td>78.96</td><td>79.90</td><td>80.31</td><td>81.06</td></tr><tr><td rowspan="3">Pseudo Label</td><td>Graph XNet* [2]</td><td>53.00</td><td>58.00</td><td>63.00</td><td>68.00</td><td>78.00</td></tr><tr><td>UPS [29]</td><td>65.51</td><td>73.18</td><td>76.84</td><td>78.90</td><td>79.92</td></tr><tr><td>Ours</td><td>74.82</td><td>79.20</td><td>80.40</td><td>81.06</td><td>81.77</td></tr></table>
167
+
168
+ Table 2. Class-level AUC testing set results comparison between our approach and other semi-supervised SOTA approaches trained with $20\%$ of labelled data on Chest Xray-14. * denotes the models use DenseNet-169 as backbone. Bold number means the best result per class and underlined shows second best results.
169
+
170
+ <table><tr><td>Method Type</td><td>Supervised</td><td colspan="3">Consistency based</td><td colspan="3">Pseudo-labelling</td></tr><tr><td>Method</td><td>Densenet-121</td><td>MT [33]*</td><td>SRC-MT [26]*</td><td>S2MTS2[25]</td><td>GraphXNet [2]</td><td>UPS [29]</td><td>Ours</td></tr><tr><td>Atelectasis</td><td>75.75</td><td>75.12</td><td>75.38</td><td>78.57</td><td>71.89</td><td>77.09</td><td>79.53</td></tr><tr><td>Cardiomegaly</td><td>80.71</td><td>87.37</td><td>87.7</td><td>88.08</td><td>87.99</td><td>85.73</td><td>89.03</td></tr><tr><td>Effusion</td><td>79.87</td><td>80.81</td><td>81.58</td><td>82.87</td><td>79.2</td><td>81.35</td><td>83.56</td></tr><tr><td>Infiltration</td><td>69.16</td><td>70.67</td><td>70.4</td><td>70.68</td><td>72.05</td><td>70.82</td><td>71.40</td></tr><tr><td>Mass</td><td>78.40</td><td>77.72</td><td>78.03</td><td>82.57</td><td>80.9</td><td>81.82</td><td>82.49</td></tr><tr><td>Nodule</td><td>74.49</td><td>73.27</td><td>73.64</td><td>76.60</td><td>71.13</td><td>76.34</td><td>77.73</td></tr><tr><td>Pneumonia</td><td>69.55</td><td>69.17</td><td>69.27</td><td>72.25</td><td>76.64</td><td>70.96</td><td>73.86</td></tr><tr><td>Pneumothorax</td><td>84.70</td><td>85.63</td><td>86.12</td><td>86.55</td><td>83.7</td><td>85.86</td><td>86.95</td></tr><tr><td>Consolidation</td><td>71.85</td><td>72.51</td><td>73.11</td><td>75.47</td><td>73.36</td><td>74.35</td><td>75.50</td></tr><tr><td>Edema</td><td>81.61</td><td>82.72</td><td>82.94</td><td>84.83</td><td>80.2</td><td>83.56</td><td>84.95</td></tr><tr><td>Emphysema</td><td>89.75</td><td>88.16</td><td>88.98</td><td>91.88</td><td>84.07</td><td>91.00</td><td>93.36</td></tr><tr><td>Fibrosis</td><td>79.30</td><td>78.24</td><td>79.22</td><td>81.73</td><td>80.34</td><td>80.87</td><td>81.86</td></tr><tr><td>Pleural Thicken</td><td>73.46</td><td>74.43</td><td>75.63</td><td>76.86</td><td>75.7</td><td>75.55</td><td>77.60</td></tr><tr><td>Hernia</td><td>86.05</td><td>87.74</td><td>87.27</td><td>85.98</td><td>87.22</td><td>85.62</td><td>85.89</td></tr><tr><td>Mean</td><td>78.19</td><td>78.83</td><td>79.23</td><td>81.06</td><td>78.00</td><td>79.92</td><td>81.77</td></tr></table>
171
+
172
+ Sec. 3.4. For the KNN classifier in (2), we set K to 200 for $2\%$ and $5\%$ of labelled data and to 50 for the remaining label proportions. These values are set based on validation results, but our approach is robust to a large range of K values - we show an ablation study that compares the performance of our method for different values of K. For ISIC2018, we resize the images to $224 \times 224$ for fair comparison with baselines. For the optimisation, we use the Adam optimizer [19], batch size 32 and learning rate 0.001. During training, data augmentation is also based on random crop and resize, and random horizontal flip. We warm up the model for 40 epochs and then train for 100 epochs, where every 20 epochs we update the anchor set with ASP. For the KNN classifier, K is set to 100 based on the validation set. The code is written in PyTorch [28] and we use two RTX 2080Ti GPUs for all experiments. The KNN computation takes 5 seconds for the Chest X-Ray14 unlabelled samples with the Faiss [16] library. We follow [25, 26, 33] to maintain an exponential moving average (EMA) version of the trained model, which is only used for evaluation, not for training.
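+
+ For reference, the KNN search over anchor features can be accelerated with Faiss [16] roughly as sketched below, using an exact inner-product index over L2-normalised features so that the returned scores are the cosine similarities of (4); the feature dimensions and K shown here are placeholders, not values from the authors' code.
+
+ ```python
+ import numpy as np
+ import faiss
+
+ feat_a = np.random.rand(5000, 1024).astype('float32')    # anchor features f_theta(x_A), placeholder shapes
+ feat_u = np.random.rand(80000, 1024).astype('float32')   # unlabelled features f_theta(x)
+ faiss.normalize_L2(feat_a)                                # in-place L2 normalisation
+ faiss.normalize_L2(feat_u)
+
+ index = faiss.IndexFlatIP(feat_a.shape[1])                # exact inner-product search
+ index.add(feat_a)
+ sims, ids = index.search(feat_u, 50)                      # K=50 nearest anchors per unlabelled sample
+ density = sims.mean(axis=1)                               # density score of Eq. (4)
+ ```
+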
173
+
174
+ # 4.2. Thorax Disease Classification Result
175
+
176
+ For the results on Chest X-Ray14 in Table 1, our method, NoTeacher [37], UPS [29], and $\mathbf{S}^2\mathbf{M}\mathbf{T}\mathbf{S}^2$ [25] use the DenseNet-121 backbone, while SRC-MT [26] and GraphXNet [2] use DenseNet-169 [12]. SRC-MT [26] is a consistency-based SSL method; NoTeacher [37] extends MT by replacing the EMA process with two networks combined with a probabilistic graph model; $\mathbf{S}^2\mathbf{M}\mathbf{T}\mathbf{S}^2$ [25] combines self-supervised pre-training with MT fine-tuning; GraphXNet [2] constructs a graph from dataset samples and assigns pseudo labels to unlabelled samples through label propagation; and UPS [29] applies probability and uncertainty thresholds to enable the pseudo-labelling of unlabelled samples. All methods use the official test set [39]. Our approach achieves SOTA results for all percentages of training labels. Compared to the pseudo-labelling approaches UPS and GraphXNet, our approach outperforms them by a margin of between $3\%$ and $20\%$ . Compared to the consistency-based approaches SRC-MT and NoTeacher, our method consistently achieves a $2\%$ improvement in all cases, even though we use a backbone architecture of lower capacity (i.e., DenseNet-121 instead of DenseNet-169).
177
+
178
+ Table 3. AUC, Sensitivity and F1 testing results on ISIC2018, where $20\%$ of the training set is labelled. Bold shows the best result per measure, and underline shows second best results.
179
+
180
+ <table><tr><td>Method</td><td>AUC</td><td>Sensitivity</td><td>F1</td></tr><tr><td>Supervised</td><td>90.15</td><td>65.50</td><td>52.03</td></tr><tr><td>SS-DCGAN [11]</td><td>91.28</td><td>67.72</td><td>54.10</td></tr><tr><td>TCSE [23]</td><td>92.24</td><td>68.17</td><td>58.44</td></tr><tr><td>TE [21]</td><td>92.70</td><td>69.81</td><td>59.33</td></tr><tr><td>MT [33]</td><td>92.96</td><td>69.75</td><td>59.10</td></tr><tr><td>SRC-MT [26]</td><td>93.58</td><td>71.47</td><td>60.68</td></tr><tr><td>Self-training [3]</td><td>90.58</td><td>67.63</td><td>54.51</td></tr><tr><td>Ours</td><td>94.36</td><td>72.14</td><td>62.23</td></tr></table>
181
+
182
+ Compared with the previous SOTA, our method outperforms $\mathrm{S}^2\mathrm{MTS}^2$ [25] by $1\%$ AUC in all cases, which is remarkable because our method is initialised with an ImageNet pre-trained model instead of an expensive self-supervised pre-training approach.
183
+
184
+ The class-level performances of the SSL methods using $20\%$ of the labelled data are shown in Table 2, which demonstrates that our method achieves the best result in 10 out of the 14 classes. Our method surpasses the previous pseudo-labelling method GraphXNet by $3.7\%$ and the threshold-based pseudo-labelling method UPS [29] by $1.8\%$ . Our method also outperforms the consistency-based methods MT [33] and SRC-MT [26] by more than $2\%$ . Compared with $\mathrm{S}^2\mathrm{MTS}^2$ [25], which relies on self-supervised learning, our method achieves better results using only an ImageNet pre-trained model, alleviating the need for computationally expensive self-supervised pre-training.
185
+
186
+ # 4.3. Skin Lesion Classification Result
187
+
188
+ We show the results on ISIC2018 in Table 3, where the competing methods are based on self-training [3], a generative adversarial network (GAN) to augment the labelled set [11], temporal ensembling [21], MT [33] and its extension [26], and a DenseNet-121 [12] baseline trained with $20\%$ of the training set. Compared with the consistency-based approaches [23, 26, 33], our method improves AUC by between $0.7\%$ and $3\%$ and the F1 score by around $1\%$ . Our method also outperforms the previous self-training approach [3] by a large margin in all measures.
189
+
190
+ # 4.4. Ablation Study
191
+
192
+ For the ablation study, we test each of our three contributions and visualise the data distribution of the subsets selected with high- and low-informativeness samples on Chest X-Ray14 [39] with a $2\%$ labelled training set, where for CDSI and ASP, we run each experiment three times and show the mean and standard deviation of the AUC results.
193
+
194
+ Cross-distribution Sample Informativeness (CDSI). We first study in Table 4 how performance is affected by
195
+
196
+ ![](images/4dbf6d0e698df45072ec343ec61dfb2f2fe73af8b8f5ed1f2e378072f479941a.jpg)
197
+ Figure 4. (Left) Mean AUC testing results for different values of K in the KNN (for CDSI in (4) and pseudo-labelling in (5)), where the green region uses ASP and the blue region does not use ASP. (Right) Mean size of $\mathcal{D}_L$ at every training stage when adding unlabelled samples of high, medium and low information content according to (2). The model is trained on Chest X-Ray14, where 2% of the training set is labelled.
198
+
199
+ ![](images/c811897f05e983ba7915b5f6ee01fd419e9b8b406226ff5b5e459d8f6349d9aa.jpg)
200
+
201
+ Table 4. Ablation study on Chest X-ray14 (2% labelled). Starting with a baseline classifier (DenseNet-121), we test the selection of unlabelled samples (to be provided with a pseudo-label) with different information content, according to (2) (i.e., low, medium, high), and the use of the anchor set purification (ASP) module.
202
+
203
+ <table><tr><td>Information Content</td><td>ASP</td><td>AUC ± std</td></tr><tr><td colspan="2">Baseline</td><td>65.84 ± 0.14</td></tr><tr><td rowspan="2">Low</td><td>X</td><td>67.18 ± 2.40</td></tr><tr><td>✓</td><td>67.76 ± 1.05</td></tr><tr><td rowspan="2">Medium</td><td>X</td><td>70.83 ± 1.49</td></tr><tr><td>✓</td><td>71.16 ± 0.51</td></tr><tr><td rowspan="2">High</td><td>X</td><td>73.81 ± 0.75</td></tr><tr><td>✓</td><td>74.44 ± 0.38</td></tr></table>
204
+
205
+ pseudo-labelling unlabelled samples with different degrees of informativeness (low, medium and high) using our CDSI. Starting from the baseline classifier DenseNet-121 that reaches an AUC of $65\%$ , we observe that pseudo-labelling low-information content unlabelled samples yields the worst result (around $67\%$ AUC) and selecting high-information content unlabelled samples produces the best result (around $73\%$ AUC). Figure 4 (right) plots how the size of the labelled set $\mathcal{D}_L$ during training depends on the degree of informativeness of the unlabelled samples to be pseudo-labelled. These results show that: 1) unlabelled samples of high-information content enable the construction of a smaller labelled set (compared with unlabelled samples of low- or medium-information content), allowing a more efficient training process that produces a more accurate KNN classifier; and 2) the standard deviations of the results in Table 4 are smaller when selecting the unlabelled samples of high-information content, compared with the low- or medium-information content. This second point can be explained by the class imbalance issue in Chest X-Ray14, where the selection of low-information content samples mostly benefits the training of the majority classes, possibly producing ineffective training for the minority classes, which can increase the variance in the results.
206
+
207
+ Table 5. AUC testing set results on Chest X-ray14 (2% labelled) for different pseudo labelling strategies ( $\alpha$ denotes the linear coefficient combining the model and KNN predictions).
208
+
209
+ <table><tr><td>Pseudo-label Strategies</td><td>Methods</td><td>AUC</td></tr><tr><td>Baseline</td><td>-</td><td>65.84</td></tr><tr><td rowspan="2">Single Prediction</td><td>Model prediction</td><td>72.63</td></tr><tr><td>KNN prediction</td><td>72.45</td></tr><tr><td rowspan="2">Mixup</td><td>random sampled α</td><td>73.23</td></tr><tr><td>MixUp [42]</td><td>69.28</td></tr><tr><td>Ours</td><td>Informative Mixup</td><td>74.44</td></tr></table>
210
+
211
+ Anchor Set Purification (ASP). Also in Table 4, we compare ASP with an alternative method that selects all pseudo-labelled samples to be included in the anchor set, for the low-, medium- and high-information content unlabelled samples. Results show that the ASP module improves the AUC by between $0.3\%$ and $1.0\%$ and reduces the standard deviation by between $0.4\%$ and $1.4\%$ . This demonstrates empirically that the ASP module enables the formation of a more informative anchor set that improves the pseudo-labelling accuracy, and consequently the final AUC results. Furthermore, in Figure 4 (left), ASP is shown to stabilise the performance of the method with respect to $K \in \{50, 100, 150, 200, 250, 300\}$ for the KNN classifier of (4). In particular, with ASP, the difference between the best and worst AUC results is around $1\%$ , while without ASP, the difference grows to $2\%$ . This can be explained by the fact that without ASP, the anchor set grows quickly with relatively less informative pseudo-labelled samples, which reduces the stability of the method.
212
+
213
+ Informative Mixup (IM). In Table 5, we show that our proposed IM in (6) produces more accurate pseudo labels, where we compare it with alternative pseudo-labelling methods: only the model prediction, only the KNN prediction, a randomly sampled $\alpha$ from a beta distribution to replace the density score in (6), and regular MixUp [42]. It is clear that using the model or KNN predictions alone as pseudo labels does not work well, most likely because of confirmation bias (former case) or the inaccuracy of the KNN classifier (latter case). MixUp [42] does not show good accuracy either, as also observed in [38] and [18] when MixUp is applied to multi-label images or images containing multiple single objects. Randomly sampling $\alpha$ to replace the density score shows a better result than MixUp, but the lack of a sample-specific weight to balance the two predictions, as in (6), damages performance. Our proposed IM shows a result that is at least $1.5\%$ better than any of the other pseudo-labelling approaches, showing the importance of using the density of the unlabelled sample with respect to the anchor set to weight the contributions of the model and KNN classifiers.
214
+
215
+ The imbalanced learning mitigation is studied in Figure 5, which shows the histogram of label distribution in percentage (for a subset of four disease minority classes and the No
216
+
217
+ ![](images/3c79b476774b837f8ff5686f204e8fb9e3874f9a62316c28d2b3d807e29d1de1.jpg)
218
+ Figure 5. The selection of highly informative unlabelled samples (blue) promotes a more balanced learning process, where the difference in the number of samples belonging to the minority or majority classes is smaller than if we selected unlabelled samples with low informativeness (yellow). Green shows the original data distribution. Full 14-class distributions are shown in the supplementary material.
219
+
220
+ Finding majority class) by selecting unlabelled samples of high (blue) and low (yellow) information content. We also show the original label distribution in green for reference.
221
+
222
+ Notice that the selection of highly informative samples significantly increases the percentage of disease minority classes (from between $5\%$ and $10\%$ to almost $30\%$ ) and decreases the percentage of the No Finding majority class (from $60\%$ to $30\%$ ), creating a more balanced distribution of these five classes. This indicates that our informative sample selection can help to mitigate the issue of imbalanced learning. We include the full 14-class histograms in the supplementary material.
223
+
224
+ # 5. Discussion and Conclusion
225
+
226
+ In this work, we introduced the anti-curriculum pseudo-labelling (ACPL) SSL method. Unlike traditional pseudo-labelling methods that use a threshold to select confidently classified samples, ACPL uses a new mechanism to select highly informative unlabelled samples for pseudo-labelling and an ensemble of classifiers to produce accurate pseudo labels. This enables ACPL to address MIA multi-class and multi-label imbalanced classification problems. We show in the experiments that ACPL outperforms previous consistency-based, pseudo-label-based and self-supervised SSL methods on the multi-label Chest X-Ray14 and multi-class ISIC2018 benchmarks. We demonstrate in the ablation study the influence of each of our contributions, and we also show how our new selection of informative samples addresses MIA imbalanced classification problems. For future work, it is conceivable that ACPL can be applied to more general computer vision problems, so we plan to test ACPL on traditional computer vision benchmarks. We will also explore semi-supervised classification with out-of-distribution (OOD) data in the initial labelled and unlabelled sets, as our method currently assumes all samples are in-distribution.
227
+
228
+ # References
229
+
230
+ [1] Eric Arazo, Diego Ortega, Paul Albert, Noel E O'Connor, and Kevin McGuinness. Pseudo-labeling and confirmation bias in deep semi-supervised learning. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE, 2020. 2
231
+ [2] Angelica I Aviles-Rivero, Nicolas Papadakis, Ruoteng Li, Philip Sellars, Qingnan Fan, Robby T Tan, and Carola-Bibiane Schonlieb. Graphxnet chest x-ray classification under extreme minimal supervision. arXiv preprint arXiv:1907.10085, 2019. 2, 3, 5, 6
232
+ [3] Wenjia Bai, Ozan Oktay, Matthew Sinclair, Hideaki Suzuki, Martin Rajchl, Giacomo Tarroni, Ben Glocker, Andrew King, Paul M Matthews, and Daniel Rueckert. Semisupervised learning for network-based cardiac mr image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 253-260. Springer, 2017. 3, 7
233
+ [4] Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41-48, 2009. 3
234
+ [5] David Berthelot, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. Remix-match: Semi-supervised learning with distribution matching and augmentation anchoring. In International Conference on Learning Representations, 2019. 1, 2
235
+ [6] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. Advances in Neural Information Processing Systems, 32:5049-5059, 2019. 1, 2
236
+ [7] Paola Cascante-Bonilla et al. Curriculum labeling: Revisiting pseudo-labeling for semi-supervised learning. arXiv preprint arXiv:2001.06001, 2020. 2, 3
237
+ [8] Noel Codella, Veronica Rotemberg, Philipp Tschandl, M Emre Celebi, Stephen Dusza, David Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael Marchetti, et al. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (isic). arXiv preprint arXiv:1902.03368, 2019. 2, 3, 5
238
+ [9] Wenhui Cui, Yanlin Liu, Yuxing Li, Menghao Guo, Yiming Li, Xiuli Li, Tianle Wang, Xiangzhu Zeng, and Chuyang Ye. Semi-supervised brain lesion segmentation with an adapted mean teacher model. In International Conference on Information Processing in Medical Imaging, pages 554-565. Springer, 2019. 3
239
+ [10] Arthur P Dempster, Nan M Laird, and Donald B Rubin. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1-22, 1977. 4
240
+ [11] Andres Diaz-Pinto, Adrián Colomer, Valery Naranjo, Sandra Morales, Yanwu Xu, and Alejandro F Frangi. Retinal image synthesis and semi-supervised learning for glaucoma assessment. IEEE transactions on medical imaging, 38(9):2211-2218, 2019. 7
241
+
242
+ [12] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700-4708, 2017. 5, 6, 7
243
+ [13] Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilicus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al. Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 590-597, 2019. 3
244
+ [14] Angela H Jiang, Daniel L-K Wong, Giulio Zhou, David G Andersen, Jeffrey Dean, Gregory R Ganger, Gauri Joshi, Michael Kaminsky, Michael Kozuch, Zachary C Lipton, et al. Accelerating deep learning by focusing on the biggest losers. arXiv preprint arXiv:1910.00762, 2019. 3
245
+ [15] Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and Alexander G Hauptmann. Self-paced curriculum learning. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015. 3
246
+ [16] Jeff Johnson et al. Billion-scale similarity search with gpus. arXiv preprint arXiv:1702.08734, 2017. 6
247
+ [17] Kenji Kawaguchi and Haihao Lu. Ordered sgd: A new stochastic optimization framework for empirical risk minimization. In International Conference on Artificial Intelligence and Statistics, pages 669-679. PMLR, 2020. 3
248
+ [18] Jang-Hyun Kim, Wonho Choo, Hosan Jeong, and Hyun Oh Song. Co-mixup: Saliency guided joint mixup with supermodular diversity. arXiv preprint arXiv:2102.03065, 2021. 8
249
+ [19] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 5, 6
250
+ [20] M Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable models. Advances in neural information processing systems, 23:1189-1197, 2010. 3
251
+ [21] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016. 2, 7
252
+ [22] Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. 2013. 2
253
+ [23] Xiaomeng Li, Lequan Yu, Hao Chen, Chi-Wing Fu, and Pheng-Ann Heng. Semi-supervised skin lesion segmentation via transformation consistent self-ensembling model. arXiv preprint arXiv:1808.03887, 2018. 3, 7
254
+ [24] Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen Awm Van Der Laak, Bram Van Ginneken, and Clara I Sánchez. A survey on deep learning in medical image analysis. Medical image analysis, 42:60-88, 2017. 1
255
+ [25] Fengbei Liu, Yu Tian, Filipe R Cordeiro, Vasileios Belagiannis, Ian Reid, and Gustavo Carneiro. Self-supervised mean teacher for semi-supervised chest x-ray classification. arXiv preprint arXiv:2103.03629, 2021. 1, 2, 3, 6, 7
256
+
257
+ [26] Quande Liu et al. Semi-supervised medical image classification with relation-driven self-ensembling model. IEEE transactions on medical imaging, 39(11):3429-3440, 2020. 3, 5, 6, 7
258
+ [27] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979–1993, 2018. 2
259
+ [28] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc., 2019. 6
260
+ [29] Mamshad Nayeem Rizve et al. In defense of pseudolabeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning. In International Conference on Learning Representations, 2020. 2, 3, 6, 7
261
+ [30] Burr Settles. Active learning literature survey. 2009. 2
262
+ [31] Weiwei Shi, Yihong Gong, Chris Ding, Zhiheng Ma, Xiaoyu Tao, and Nanning Zheng. Transductive semi-supervised deep learning using min-max features. In Proceedings of the European Conference on Computer Vision (ECCV), pages 299-315, 2018. 2
263
+ [32] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in Neural Information Processing Systems, 33, 2020. 1, 2
264
+ [33] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in Neural Information Processing Systems, 30, 2017. 2, 6, 7
265
+ [34] Yu Tian, Gabriel Maicas, Leonardo Zorron Cheng Tao Pu, Rajvinder Singh, Johan W Verjans, and Gustavo Carneiro. Few-shot anomaly detection for polyp frames from colonoscopy. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 274-284. Springer, 2020. 1
266
+ [35] Yu Tian, Guansong Pang, Fengbei Liu, Seon Ho Shin, Johan W Verjans, Rajvinder Singh, Gustavo Carneiro, et al. Constrained contrastive distribution learning for unsupervised anomaly detection and localisation in medical images. MICCAI2021, 2021. 1
267
+ [36] Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The ham10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific data, 5(1):1-9, 2018. 1, 2, 3, 5
268
+ [37] Balagopal Unnikrishnan, Cuong Manh Nguyen, Shafa Balaram, Chuan Sheng Foo, and Pavitra Krishnaswamy.
269
+
270
+ Semi-supervised classification of diagnostic radiographs with noteacher: A teacher that is not mean. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 624-634. Springer, 2020. 3, 6
271
+ [38] Qian Wang, Ning Jia, and Toby P Breckon. A baseline for multi-label image classification using an ensemble of deep convolutional neural networks. In 2019 IEEE International Conference on Image Processing (ICIP), pages 644-648. IEEE, 2019. 8
272
+ [39] Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M Summers. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2097-2106, 2017. 1, 2, 3, 5, 6, 7
273
+ [40] Xiaoxia Wu, Ethan Dyer, and Behnam Neyshabur. When do curricula work? In International Conference on Learning Representations, 2021. 3
274
+ [41] Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. Self-training with noisy student improves imagenet classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10687-10698, 2020. 2
275
+ [42] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017. 4, 5, 8
acplanticurriculumpseudolabellingforsemisupervisedmedicalimageclassification/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:59cdeef901179f4d8a4285894bee732e522a74fdc9f03e3cb67fc7970970d7c9
3
+ size 407213
acplanticurriculumpseudolabellingforsemisupervisedmedicalimageclassification/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1200817c0a4ed2971c3ebd7af631b4acf6ee68f38fac69d81e9a983cce88e3e8
3
+ size 424530
acquiringadynamiclightfieldthroughasingleshotcodedimage/16a63fb6-6ee6-4340-b18b-b692eed45d81_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d8d69280ce20ee131a678072256fc2f520fb6eb7a546ccdbeb7f5da12cd707b1
3
+ size 77710
acquiringadynamiclightfieldthroughasingleshotcodedimage/16a63fb6-6ee6-4340-b18b-b692eed45d81_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3568cef535b1d628040571bbc73c8bcd4b17ce1fc67b1677073c94436b2021c2
3
+ size 98025
acquiringadynamiclightfieldthroughasingleshotcodedimage/16a63fb6-6ee6-4340-b18b-b692eed45d81_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:66f9159c5065947d84ddb44fc6af15aae7964b01c6ebfbf2bdc4c948d9982073
3
+ size 5270432
acquiringadynamiclightfieldthroughasingleshotcodedimage/full.md ADDED
@@ -0,0 +1,335 @@
1
+ # Acquiring a Dynamic Light Field through a Single-Shot Coded Image
2
+
3
+ Ryoya Mizuno†, Keita Takahashi†, Michitaka Yoshida‡, Chihiro Tsutake†, Toshiaki Fujii†, Hajime Nagahara‡ †Nagoya University, Japan, ‡Osaka University, Japan
4
+
5
+ # Abstract
6
+
7
+ We propose a method for compressively acquiring a dynamic light field (a 5-D volume) through a single-shot coded image (a 2-D measurement). We designed an imaging model that synchronously applies aperture coding and pixel-wise exposure coding within a single exposure time. This coding scheme enables us to effectively embed the original information into a single observed image. The observed image is then fed to a convolutional neural network (CNN) for light-field reconstruction, which is jointly trained with the camera-side coding patterns. We also developed a hardware prototype to capture a real 3-D scene moving over time. We succeeded in acquiring a dynamic light field with $5 \times 5$ viewpoints over 4 temporal sub-frames (100 views in total) from a single observed image. Repeating capture and reconstruction processes over time, we can acquire a dynamic light field at $4 \times$ the frame rate of the camera. To our knowledge, our method is the first to achieve a finer temporal resolution than the camera itself in compressive light-field acquisition. Our software is available from our project webpage. $^{1}$
8
+
9
+ # 1. Introduction
10
+
11
+ A light field is represented as a set of multi-view images, where dozens of views are aligned on a 2-D grid with tiny viewpoint intervals. This representation contains rich visual information of a target scene and thus can be used for various applications such as 3-D display [14, 38], view synthesis [20,58], depth estimation [34,51], synthetic refocusing [13, 25], and object recognition [17, 45]. The scope of applications will further expand if the target scene is able to move over time. However, a light field varying over time, i.e., a dynamic light field, is challenging to acquire due to the huge data rate, which is proportional to both the number of views and frame rate.
12
+
13
+ Several approaches to acquire light fields have been investigated as summarized in Fig. 1. The most straightforward approach is to construct an array of cameras [5,37,49], which requires bulky and costly hardware.
14
+
15
+ ![](images/357ee0d242a4f1a21dc20c743d6abb142d56fc2892312dfb71c997117d8d1182.jpg)
16
+ Figure 1. Our achievement compared with representative previous works (camera array [49], lens-array camera [24], coded-aperture camera [12], and coded exposure camera [54]). Axes are in relative scales w.r.t. camera's spatial resolution and frame rate.
17
+
18
+ The second approach is to insert a micro-lens array in front of an image sensor [1, 2, 24, 25, 29, 46], which enables us to capture a light field in a single-shot image. However, the spatial resolution of each viewpoint image is sacrificed for the angular resolution (number of views). In the above two approaches, the frame rate of the acquired light field is at most equivalent to that of the cameras. Moreover, the data rate is not compressed because each light ray is sampled individually.
19
+
20
+ The third approach aims to acquire a light field compressively by using a single camera equipped with a coded mask or aperture [3, 6, 7, 12, 16, 18, 22, 23, 39, 41, 43]. This kind of camera was used to obtain a small number of coded images, from which a light field with the full-sensor spatial resolution can be reconstructed. For static scenes, taking more images with different coding patterns is beneficial to achieve higher reconstruction quality. However, for moving scenes, the use of multiple coded images involves additional complexities related to scene motions. Hajisharif et al. [8] used a high dimensional light-field dictionary that spanned several temporal frames. However, their dictionary-based light-field reconstruction required a prohibitively long computation time. Sakai et al. [31] handled scene motions by alternating two coding patterns over time and by training their CNN-based algorithm on dynamic scenes. However, the light field was reconstructed only for every two temporal frames (at $0.5 \times$ the frame rate of the camera).
21
+
22
+
23
+
24
+ In this paper, we advance the compressive approach several steps further to innovate the imaging method for a dynamic light field. As shown in Fig. 1, our method pursues the full-sensor spatial resolution and a faster frame rate than the camera itself. To this end, we design an imaging model that synchronously applies aperture coding [12, 16, 23] and pixel-wise exposure coding [9, 30, 48, 54] within a single exposure time. This coding scheme enables us to effectively embed the original information (a 5-D volume of a dynamic light field) into a single coded image (a 2-D measurement). The coded image is then fed to a CNN for light-field reconstruction, which is jointly trained with the camera-side coding patterns. We also develop a hardware prototype to capture real 3-D scenes moving over time. As a result, we succeeded in acquiring the dynamic light field with $5 \times 5$ viewpoints over 4 temporal sub-frames (100 views in total) from a single coded image. Repeating capture and reconstruction processes over time, we acquired a dynamic light field at $4 \times$ the frame rate of the camera. To our knowledge, our method is the first to achieve a finer temporal resolution than the camera itself in compressive light-field acquisition.
25
+
26
+ # 2. Background
27
+
28
+ # 2.1. Computational Photography
29
+
30
+ In the literature of computational photography, aperture coding has been used to encode the viewpoint (angular) dimension of a light field [6, 12, 16, 23], while exposure coding has been adopted to encode fast temporal changes in a monocular video [9, 28, 30, 48, 54]. Our method combines them to encode both the viewpoint (angular) and temporal dimensions simultaneously. Our method is also considered as an extreme case of snapshot compressive imaging [44, 56, 57], where a higher dimensional (typically 3-D) data volume is compressed into a 2-D sensor measurement.
31
+
32
+ We noticed that Vargas et al. [42] recently proposed an imaging architecture similar to ours for compressive light field acquisition. However, their method was designed for static light fields. Accordingly, their image formation model implicitly assumed that the target light field should be invariant during an exposure time (during the period when the time-varying coding patterns were applied), which is theoretically incompatible with moving scenes. Moreover, they did not report hardware implementation for the pixel-wise exposure coding. In contrast, our method is designed to handle motions during each exposure time, and it is fully implemented as a hardware prototype.
33
+
34
+ We model the entire imaging pipeline (coded-image acquisition and light-field reconstruction) as a deep neural network, and jointly optimize the camera-side coding patterns and the reconstruction algorithm. This design aligns with the recent trend of deep optics [4,11,12,15,26,31,36,52,54]
35
+
36
+ where optical elements and computational algorithms are jointly optimized under the framework of deep learning. However, our method is designed to handle higher dimensional data (dynamic light fields) than the previous works.
37
+
38
+ # 2.2. Light-Field Reconstruction
39
+
40
+ Reconstruction of a light field from a coded/compressed measurement is considered an inverse problem, for which several classes of methods can be used. Traditional methods [3, 18, 19] formulated this problem as energy minimization with rather simple, explicitly defined prior terms and solved it using iterative algorithms. These methods often suffered from insufficient reconstruction quality and long computation times. Recently, deep-learning-based methods [7, 12, 22, 41, 47, 53] have gained popularity owing to the excellent representation capability of data-driven implicit priors. Trained on a suitable dataset, these methods can achieve high-quality reconstruction. Moreover, reconstruction (inference) on a pre-trained network does not require much computation time. Hybrid approaches have also been investigated: algorithm unrolling methods [6, 21] unroll the procedures of iterative algorithms into trainable networks, whereas plug-and-play methods [56, 57] use pre-trained network models as building blocks of iterative algorithms.
41
+
42
+ We take a deep-learning-based approach and jointly optimize the entire process (coded-image acquisition and light-field reconstruction) in the spirit of deep optics. For the reconstruction part, we use a rather plain network architecture to balance the reconstruction quality and the computational efficiency. Further improvement would be expected with more sophisticated, light-field-specific network architectures [6, 53]. We leave this as future work, because the main focus of this paper is the design of the image acquisition process rather than the reconstruction network.
43
+
44
+ In recent years, view synthesis from a single image [10, 27, 33, 35, 40, 50] has attracted much attention. In principle, 3-D reconstruction/rendering from an ordinary monocular image (without coding) is an ill-posed problem; the results are hallucinated by using the implicit scene priors learned from the training dataset rather than physical cues. In contrast, our method aims to recover the 3-D and motion information that is embedded into a single image through the camera-side coding process.
45
+
46
+ # 3. Proposed Method
47
+
48
+ # 3.1. Notations and Problem Formulation
49
+
50
+ A schematic diagram of the camera we assume is shown in Fig. 2. Each light ray coming into the camera is parameterized with five variables, $(u,v,x,y,t)$, where $(u,v)$ and $(x,y)$ denote the intersections with the aperture and imaging planes, respectively, and $t$ denotes the time within a single
51
+
52
+ ![](images/cc255d2a5ef66ded6b5db0cfd37ddd6d486e2980d62c8c286e88ffe4ca664e0f.jpg)
53
+ Figure 2. Example of dynamic light field (left) and schematic diagram of camera (right).
54
+
55
+ exposure time of the camera. We discretize the variable space into a 5-D integer grid, where the range of each variable is described as $S_{\xi} = [0,N_{\xi})$ ($\xi \in \{x,y,u,v,t\}$). By using these variables, the intensity of a light ray is described as $L_{x,y}(u,v,t)$. Since $(u,v)$ is associated with the viewpoint (angle), $L_{x,y}(u,v,t)$ is equivalent to a set of multi-view videos, i.e., a dynamic light field.
56
+
57
+ Our aim is to acquire the latent dynamic light field $L_{x,y}(u,v,t)$ : a 5-D volume with $N_xN_yN_uN_vN_t$ unknowns, from a single coded image $I_{x,y}$ : a 2-D measurement with $N_{x}N_{y}$ observables. Hereafter, we assume $N_{u} = N_{v} = 5$ and $N_{t} = 4$ unless mentioned otherwise.
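
As a quick arithmetic check of how much this formulation compresses, the following snippet counts unknowns against observables; the sensor resolution ($656 \times 512$) is taken from the prototype described in Sec. 3.3, and the factor reduces to $N_u N_v N_t = 100$ regardless of the spatial resolution.

```python
# Unknowns of the 5-D dynamic light field vs. observables of a single coded image.
Nu, Nv, Nt = 5, 5, 4        # viewpoints and temporal sub-frames assumed in the paper
Nx, Ny = 656, 512           # sensor resolution of the prototype (Sec. 3.3)
unknowns = Nx * Ny * Nu * Nv * Nt
observables = Nx * Ny
print(unknowns // observables)   # 100: each pixel must encode 100 light-field samples
```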
58
+
59
+ # 3.2. Image Acquisition Model
60
+
61
+ If the camera has no coding functionalities (in the case of an ordinary camera), the observed image is given by
62
+
63
+ $$
64
+ I _ {x, y} = \sum_ {(u, v, t) \in S _ {u} \times S _ {v} \times S _ {t}} L _ {x, y} (u, v, t). \tag {1}
65
+ $$
66
+
67
+ Each pixel value, $I_{x,y}$ , is the sum of light rays over the viewpoint $(u,v)$ and temporal $(t)$ dimensions. Therefore, the variation along $u,v,t$ dimensions is simply blurred out, making it difficult to recover.
68
+
69
+ In contrast, we design an imaging method that can effectively preserve the original 5-D information. We exploit the combination of aperture coding and pixel-wise exposure coding, which are synchronously varied within a single exposure time. The observed image is given as
70
+
71
+ $$
72
I _ {x, y} = \sum_ {(u, v, t) \in S _ {u} \times S _ {v} \times S _ {t}} a (u, v, t) p _ {x, y} (t) L _ {x, y} (u, v, t), \tag {2}
73
+ $$
74
+
75
+ where $a(u,v,t)\in [0,1]$ (semi-transparency) and $p_{x,y}(t)\in \{0,1\}$ (on/off) are coding patterns applied on the aperture and pixel planes, respectively. This imaging process can be regarded as two-step coding, as follows. First, a series of aperture coding patterns, $a(u,v,t)$, is applied to
76
+
77
+ ![](images/d4df4c0cd2cfc4e8bf529382e6881833048d15a8f6cdc8a283be459e0f4ffd86.jpg)
78
+ Figure 3. Coding patterns applied on aperture and pixel planes.
79
+
80
+ $L_{x,y}(u,v,t)$ over time, which reduces the original 5-D volume into a 3-D spatio-temporal tensor, $J_{x,y}(t)$ , as
81
+
82
+ $$
83
+ J _ {x, y} (t) = \sum_ {(u, v) \in S _ {u} \times S _ {v}} a (u, v, t) L _ {x, y} (u, v, t). \tag {3}
84
+ $$
85
+
86
+ Next, the 3-D tensor, $J_{x,y}(t)$ , is further reduced into a 2-D measurement, $I_{x,y}$ , through the pixel-wise exposure coding over time using $p_{x,y}(t)$ , as
87
+
88
+ $$
89
+ I _ {x, y} = \sum_ {t \in S _ {t}} p _ {x, y} (t) J _ {x, y} (t). \tag {4}
90
+ $$
91
+
92
+ By combining these two steps, we encode both the viewpoint $(u,v)$ and temporal $(t)$ dimensions and embed them into a single 2-D image.
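
For concreteness, the two-step coding of Eqs. (2)-(4) can be simulated with a few lines of NumPy. The sketch below uses a toy light field and random coding patterns (the real patterns are learned, as described later); the array names and shapes are our own choices, not taken from the released code.

```python
import numpy as np

# Toy sizes: 5x5 viewpoints, 4 temporal sub-frames, 64x64 pixels.
Nu, Nv, Nt, Nx, Ny = 5, 5, 4, 64, 64
rng = np.random.default_rng(0)

L = rng.random((Nu, Nv, Nt, Nx, Ny))                 # dynamic light field L_{x,y}(u,v,t)
a = rng.random((Nu, Nv, Nt))                         # aperture patterns a(u,v,t) in [0,1]
p = rng.integers(0, 2, (Nt, Nx, Ny)).astype(float)   # pixel-wise exposure p_{x,y}(t) in {0,1}

# Step 1 (Eq. 3): aperture coding collapses the viewpoint dimensions.
J = np.einsum('uvt,uvtxy->txy', a, L)

# Step 2 (Eq. 4): pixel-wise exposure coding collapses the temporal dimension.
I = np.einsum('txy,txy->xy', p, J)

# Eq. (2) in a single contraction, for verification.
assert np.allclose(I, np.einsum('uvt,txy,uvtxy->xy', a, p, L))
print(I.shape)                                       # (64, 64): one coded 2-D measurement
```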
93
+
94
+ An example of the coding patterns is shown in Fig. 3. As mentioned later, these patterns are directly linked with the parameters of a CNN (AcqNet), which is jointly trained with another CNN for light-field reconstruction (RecNet). Therefore, these coding patterns are optimized for the training dataset so as to preserve as much of the light-field information as possible in the observed image.
95
+
96
+ Figure 4 shows two images (close-ups of the same portion) obtained from a test scene through two imaging models: the ordinary camera (Eq. (1)) and ours (Eq. (2)). The ordinary camera obtains a simply blurred observation, while ours obtains a dappled image due to the coding patterns. To further analyze the effect of coding, we also used a primitive scene with a fronto-parallel plane (a primitive plane scene). As shown in Fig. 5, we prepared an image $G(x,y)$ with nine bright points as the texture for the plane. We then synthesized a dynamic light field using the parameters for the 2-D lateral velocity $(\alpha_{x},\alpha_{y})$ [pixels per unit time] and disparity $d$ [pixels per viewpoint] (corresponding to the depth) as
97
+
98
+ $$
99
+ L _ {x, y} (u, v, t) = G (x - d u - \alpha_ {x} t, y - d v - \alpha_ {y} t) \tag {5}
100
+ $$
101
+
102
+ from which we computed an observed image by using Eq. (2). Some resulting images obtained with different parameters are shown in Fig. 5 (the brightness is corrected for visualization). These images can be interpreted as point spread functions (PSFs) for various motion and disparity values. Notably, these PSFs are distinct from each other. Moreover, even in a single image, the PSFs for the nine
103
+
104
+ ![](images/3ec1ba4ed2e5a6c26261399331e342de92df4e8cb596afcc939134f44356f973.jpg)
105
+
106
+ ![](images/5c91f53f4f97bd87ce9bce2207b5792c395b01d72ed160cf2b2c42430cd7a518.jpg)
107
+
108
+ ![](images/7934679940ae8e54e48a1a4f0acc2c805e23468a183bf4bddf1cb1a7a0b9fa9c.jpg)
109
+ Figure 4. Example images acquired by ordinary camera Eq. (1) (left) and our imaging model of Eq. (2) (right).
110
+ Figure 5. Our imaging model yields distinct PSFs for different motion and disparity values (coding patterns in Fig. 3 were used).
111
+
112
+ points differ from each other. These results show that both motions and disparities, which are associated with changes along the temporal $(t)$ and viewpoint $(u,v)$ dimensions, respectively, are encoded into PSFs whose shapes vary with the spatial coordinates $(x,y)$. The encoded information is not human readable, but it can be deciphered by RecNet, which is jointly trained with the coding patterns.
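
The PSF analysis on the primitive plane scene can be reproduced along the following lines. This is a rough sketch assuming integer pixel shifts and random stand-in coding patterns, so it only illustrates the mechanism of Eqs. (2) and (5), not the exact PSFs shown in Fig. 5.

```python
import numpy as np

Nu, Nv, Nt, Nx, Ny = 5, 5, 4, 64, 64
rng = np.random.default_rng(0)
a = rng.random((Nu, Nv, Nt))                              # stand-in aperture patterns
p = rng.integers(0, 2, (Nt, Nx, Ny)).astype(float)        # stand-in exposure patterns

def plane_light_field(G, d, ax, ay):
    """Eq. (5): fronto-parallel plane with texture G, disparity d [px/view],
    and lateral velocity (ax, ay) [px/unit time]; integer shifts only."""
    L = np.zeros((Nu, Nv, Nt, Nx, Ny))
    for u in range(Nu):
        for v in range(Nv):
            for t in range(Nt):
                L[u, v, t] = np.roll(G, (d * u + ax * t, d * v + ay * t), axis=(0, 1))
    return L

# A single bright point as texture: the coded image then shows the PSF for (d, ax, ay).
G = np.zeros((Nx, Ny))
G[Nx // 2, Ny // 2] = 1.0
psf = np.einsum('uvt,txy,uvtxy->xy', a, p, plane_light_field(G, d=1, ax=2, ay=0))  # Eq. (2)
```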
113
+
114
+ # 3.3. Hardware Implementation
115
+
116
+ We developed a prototype camera shown in Fig. 6 that can apply aperture coding and pixel-wise exposure coding within a single exposure time.
117
+
118
+ We used a Nikon Rayfact (25 mm F1.4 SF2514MC) as the primary lens. The aperture coding was implemented using a liquid crystal on silicon (LCoS) display (Forth Dimension Displays, SXGA-3DM), which had $1280 \times 1024$ pixels. We divided the central area of the LCoS display into $5 \times 5$ regions, each with $150 \times 150$ pixels. Accordingly, the angular resolution of the light field was set to $5 \times 5$ . The pixel-wise exposure coding was implemented using a row-column-wise exposure sensor [54] that had $656 \times 512$ pixels. We synchronized the LCoS display with the image sensor via an external circuit, so that four sets of coding patterns were synchronously applied within a single exposure time. The timing chart is shown in Fig. 7. The time duration assigned for each coding pattern was set to $17~\mathrm{ms}$ .
119
+
120
+ ![](images/74dca1a33cd9d851e9a3b8d304997538952ac6e46b111934e127d3cc26b28ca6.jpg)
121
+ Figure 6. Our camera prototype (left) and optical diagram (right).
122
+
123
+ ![](images/18f942858ab4a1b8f8eb068a2d452cb538913a39cdae91bd5dabaa02a07b1138.jpg)
124
+
125
+ ![](images/88d8d6470342add100ea9ab99e402cd3b0a89e93fabe15096e8de4869ec6a9a1.jpg)
126
+ Figure 7. Time chart of our camera. Exposure timing is different for four vertically divided regions on image sensor.
127
+
128
+ Accordingly, the unit time for the target light field was also 17 ms (58.8 fps). Meanwhile, a single exposure time of the camera spanned the 4 time units (temporal sub-frames), and thus, the interval between two successive exposed images was 68 ms (14.7 fps in terms of the camera's frame rate).
129
+
130
+ We mention several restrictions resulting from the image sensor's hardware. First, the sensor was not equipped with RGB filters and was thus incapable of obtaining color information. Second, the coding patterns were not freely designable, because they were generated by column-wise and row-wise control signals repeating every $8 \times 8$ pixels. Therefore, the applicable coding patterns were limited to binary, $8 \times 8$-pixel periodic, and row-column separable ones. This restriction was considered in our network design, as mentioned later. Finally, due to the timing of the vertical scan, the time duration covered by a single exposed image depended on the vertical position. More precisely, as shown in Fig. 7, the image sensor was vertically divided into 4 regions, each of which had a distinctive exposure timing with a $17~\mathrm{ms}$ offset from its neighbors. Accordingly, these regions were modulated by the same four sets of coding patterns but in different orders. To accommodate these differences, we used a single instance of AcqNet but permuted the order of time units in the input light field for each of the 4 regions. We prepared 4 instances of RecNet corresponding to the 4 regions and jointly trained them with the coding patterns. This extension required four region-wise reconstruction processes conducted in parallel, but it still maintained a $4\times$ finer temporal resolution than the camera.
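
To illustrate what "binary, $8 \times 8$-pixel periodic, and row-column separable" means, the sketch below builds exposure patterns from per-sub-frame row and column control signals. The signals are random here (the real patterns are learned), and combining the row and column codes by a logical AND is our assumption about the sensor of [54].

```python
import numpy as np

def separable_exposure_patterns(Nt=4, period=8, Nx=656, Ny=512, seed=0):
    """Binary, 8x8-periodic, row-column separable exposure patterns p_{x,y}(t).
    Each sub-frame pattern combines a row code and a column code (assumed here
    to act as a logical AND), repeated every `period` pixels over the sensor."""
    rng = np.random.default_rng(seed)
    row = rng.integers(0, 2, (Nt, period))            # row-wise control signals
    col = rng.integers(0, 2, (Nt, period))            # column-wise control signals
    tile = row[:, :, None] & col[:, None, :]          # (Nt, 8, 8) binary tiles
    reps = (1, (Nx + period - 1) // period, (Ny + period - 1) // period)
    return np.tile(tile, reps)[:, :Nx, :Ny].astype(float)

p = separable_exposure_patterns()
print(p.shape)                                        # (4, 656, 512)
```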
131
+
132
+ ![](images/bae79460aa28238fb9f05d936008d45f734bc4021777bfe62999b377f14f85ed.jpg)
133
+ Figure 8. Our network architecture consists of AcqNet and RecNet, which correspond to coded image acquisition and light-field reconstruction processes, respectively. Dynamic light-field ranging over four temporal units is processed at once.
134
+
135
+ # 3.4. Network Design and Training
136
+
137
+ As shown in Fig. 8, our method was implemented as a fully convolutional network, consisting of AcqNet and RecNet. AcqNet is a differentiable representation of the image formation model with trainable coding patterns, where a target light field is compressed into a single observed image. RecNet was designed to receive the observed image as input and reconstruct the original light field. The entire network was trained end-to-end using the squared error against the ground-truth light field as the loss function. By doing so, the image acquisition and light-field reconstruction processes were jointly optimized. When a real camera was used, the coding patterns for the camera were tuned in accordance with the trained parameters of AcqNet. Then, image acquisition was conducted physically on the imaging hardware, and only the reconstruction (inference on RecNet) was performed on the computer.
138
+
139
+ AcqNet takes as input a dynamic light field over 4 consecutive time units, which has $N_{x} \times N_{y}$ pixels and $5 \times 5$ viewpoints per time unit. The viewpoint dimensions are folded into the channel dimension, resulting in 4 input tensors with the shape of $25 \times N_{x} \times N_{y}$. The first block of AcqNet corresponds to the aperture coding (Eq. (3)). To implement this process, we followed Inagaki et al. [12]; we used 2-D convolutional layers with $1 \times 1$ kernels and no biases, where each kernel weight corresponds to the aperture transmittance for one viewpoint. We prepared 4 separate convolutional layers for the 4 time units, in each of which 25 channels were reduced into a single channel. The outputs from these layers are stacked along the channel dimension, resulting in a tensor of $4 \times N_{x} \times N_{y}$. The second block corresponds to the pixel-wise exposure coding (Eq. (4)), where $8 \times 8$ repetitive patterns are applied. For this process, we prepared 64 separate convolutional layers ($1 \times 1$ kernels without biases), each of which takes a tensor of $4 \times N_{x} / 8 \times N_{y} / 8$ as input (every $8 \times 8$ pixels extracted
140
+
141
+ from the tensor of $4 \times N_x \times N_y$) and reduces 4 channels into a single channel. To constrain the coding patterns to be hardware-implementable (binary and row-column separable), we used the same training technique as Yoshida et al. [55] (see Section 4.1 in [55]). The outputs from these layers are stacked along the channel dimension, resulting in a tensor of $64 \times N_x / 8 \times N_y / 8$, which is equivalent to a single observed image with $N_x \times N_y$ pixels. Finally, to account for noise during the acquisition process, Gaussian noise (zero mean and $\sigma = 0.005$ w.r.t. the range of pixel values [0, 1]) is added to the observed image.
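
A minimal PyTorch sketch of this AcqNet structure is given below. It follows the description above (per-time-unit $1 \times 1$ aperture convolutions, 64 per-block-position $1 \times 1$ exposure convolutions, and additive Gaussian noise) but omits the binarization/separability constraints enforced with the technique of Yoshida et al. [55], so it should be read as an outline rather than the released model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AcqNet(nn.Module):
    """Differentiable image-formation model (sketch). Hardware constraints
    (binary, row-column separable patterns) and weight clamping are omitted."""
    def __init__(self, n_views=25, n_time=4, block=8, sigma=0.005):
        super().__init__()
        self.n_time, self.block, self.sigma = n_time, block, sigma
        # Aperture coding (Eq. 3): one 1x1 conv without bias per time unit, 25 -> 1 channel.
        self.aperture = nn.ModuleList(
            [nn.Conv2d(n_views, 1, kernel_size=1, bias=False) for _ in range(n_time)])
        # Pixel-wise exposure coding (Eq. 4): 64 separate 1x1 convs without bias,
        # 4 -> 1 channel, one for each position inside an 8x8 block.
        self.exposure = nn.ModuleList(
            [nn.Conv2d(n_time, 1, kernel_size=1, bias=False) for _ in range(block * block)])

    def forward(self, lf):                              # lf: (B, 4, 25, Nx, Ny)
        J = torch.cat([self.aperture[t](lf[:, t]) for t in range(self.n_time)], dim=1)
        sub = F.pixel_unshuffle(J, self.block)          # (B, 4*64, Nx/8, Ny/8)
        b, _, h, w = sub.shape
        sub = sub.reshape(b, self.n_time, self.block * self.block, h, w)
        coded = torch.cat([self.exposure[k](sub[:, :, k])
                           for k in range(self.block * self.block)], dim=1)
        return coded + self.sigma * torch.randn_like(coded)   # additive Gaussian noise

lf = torch.rand(1, 4, 25, 64, 64)                       # toy dynamic light field
print(AcqNet()(lf).shape)                               # torch.Size([1, 64, 8, 8]) ~ one 64x64 coded image
```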
142
+
143
+ RecNet accepts an output from AcqNet (or an image acquired with the real camera) as a tensor of $64 \times N_x / 8 \times N_y / 8$. The first 5 convolutional layers gradually increase the number of channels to 256 while keeping the spatial size unchanged. Then, the tensor is reshaped into $4 \times N_x \times N_y$ using a pixel shuffling operation [32]. The subsequent two convolutional layers increase the number of channels to 100, followed by 19 convolutional layers and a residual connection for refinement. The output from RecNet is the latent dynamic light field represented as a tensor of $100 \times N_x \times N_y$, where the 100 channels correspond to $5 \times 5$ views over 4 time units (temporal sub-frames). As mentioned in Section 3.3, four instances of RecNet are used in parallel to handle the time differences among the four vertical regions.
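
A rough PyTorch sketch of this RecNet structure is shown below; the kernel sizes and the intermediate channel widths of the first five layers are our guesses, since only the endpoints (64, 256, 4, and 100 channels) are specified above.

```python
import torch
import torch.nn as nn

class RecNet(nn.Module):
    """Reconstruction CNN (sketch): only the endpoints of the channel widths are
    taken from the paper; kernel sizes and intermediate widths are assumptions."""
    def __init__(self):
        super().__init__()
        widths = [64, 128, 160, 192, 224, 256]           # 5 convs growing to 256 channels
        head = []
        for cin, cout in zip(widths[:-1], widths[1:]):
            head += [nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True)]
        self.head = nn.Sequential(*head)
        self.shuffle = nn.PixelShuffle(8)                # 256 x Nx/8 x Ny/8 -> 4 x Nx x Ny
        self.expand = nn.Sequential(                     # two convs growing to 100 channels
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 100, 3, padding=1), nn.ReLU(inplace=True))
        refine = []
        for _ in range(19):                              # 19 convs for refinement
            refine += [nn.Conv2d(100, 100, 3, padding=1), nn.ReLU(inplace=True)]
        self.refine = nn.Sequential(*refine)

    def forward(self, coded):                            # coded: (B, 64, Nx/8, Ny/8)
        x = self.expand(self.shuffle(self.head(coded)))
        return x + self.refine(x)                        # residual refinement; (B, 100, Nx, Ny)

print(RecNet()(torch.zeros(1, 64, 8, 8)).shape)          # torch.Size([1, 100, 64, 64])
```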
144
+
145
+ We finally mention the training dataset. We first collected 223,020 light-field patches from 51 static light fields with intensity augmentation. Next, following Sakai et al. [31], we applied 2-D lateral motions (in-plane translations) to the collected patches to synthesize virtually moving light-field samples. We used linear motions with constant velocities $(\alpha_{x},\alpha_{y})$ [pixels per unit time], where $\alpha_{x},\alpha_{y}\in \{-2,-1,0,1,2\}$; this is equivalent to at most $\pm 8$ pixels of translation per frame in terms of the camera's frame rate. This motion model is simple and limited, but it would be sufficient for the motions within a single exposure time, which is sufficiently short. We had 25 motion patterns in total, all
146
+
147
+ of which were applied to each light-field patch. In total, we had 5,575,500 samples of dynamic light fields, each with $64 \times 64$ pixels at $5 \times 5$ viewpoints over 4 time units. Note that even a single training sample had a significant size (409,600 elements), which necessitated a lightweight network.
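
The motion augmentation can be sketched as follows. The wrap-around translation via np.roll and the random stand-in patch are simplifications of the actual pipeline, which presumably crops and shifts real light-field patches.

```python
import numpy as np
from itertools import product

def animate(static_patch, ax, ay, Nt=4):
    """Turn a static light-field patch (Nu, Nv, 64, 64) into a virtually moving one
    (Nu, Nv, Nt, 64, 64) by translating it at constant velocity (ax, ay) [px/unit time]."""
    frames = [np.roll(static_patch, (ax * t, ay * t), axis=(2, 3)) for t in range(Nt)]
    return np.stack(frames, axis=2)

velocities = list(product([-2, -1, 0, 1, 2], repeat=2))    # the 25 motion patterns
patch = np.random.default_rng(0).random((5, 5, 64, 64))    # stand-in static patch
samples = [animate(patch, ax, ay) for ax, ay in velocities]
print(len(samples), samples[0].shape)                      # 25 (5, 5, 4, 64, 64)
```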
148
+
149
+ We implemented our software using PyTorch. The network was trained for five epochs using the Adam optimizer. The training took approximately seven days on a PC equipped with an NVIDIA GeForce RTX 3090. We also trained our model with $8 \times 8$ views and with different ranges for the assumed motions $(\alpha_{x}, \alpha_{y})$. Please refer to the supplementary material for details.
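
A bare-bones joint training loop in this spirit might look like the following. It reuses the AcqNet and RecNet sketches above, and `loader` is a random stand-in for the real DataLoader of dynamic light-field patches, so this is only an outline of the joint optimization, not the authors' training script.

```python
import torch

acq, rec = AcqNet(), RecNet()                        # sketches from Sec. 3.4 above
opt = torch.optim.Adam(list(acq.parameters()) + list(rec.parameters()))
mse = torch.nn.MSELoss()

# Random stand-in for the real DataLoader of ground-truth dynamic light-field patches.
loader = [torch.rand(2, 4, 25, 64, 64) for _ in range(4)]

for epoch in range(5):                               # the paper trains for five epochs
    for lf in loader:                                # lf: (B, 4, 25, 64, 64) ground truth
        coded = acq(lf)                              # simulated single-shot coded image
        pred = rec(coded)                            # (B, 100, 64, 64) reconstruction
        target = lf.reshape(lf.shape[0], 100, 64, 64)  # we fix one ordering convention for the 100 views
        loss = mse(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
```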
150
+
151
+ # 4. Experiments
152
+
153
+ We conducted several quantitative evaluations using a computer-generated scene and experiments using our prototype camera. To summarize, we succeeded in acquiring a dynamic light field with $4 \times$ finer temporal resolution than the camera itself. Note that there is no baseline to compete against because, to our knowledge, no prior work has achieved the same goal as ours. Please refer to the supplementary video for better visualization of our results.
154
+
155
+ # 4.1. Quantitative Evaluation
156
+
157
+ Ablation study for the coding method. To validate our image acquisition model in Eq. (2), we analyzed the effect of coding on the aperture ($a(u,v,t)$) and pixel ($p_{x,y}(t)$) planes. In addition to our original method (denoted as $\mathbf{A} + \mathbf{P}$), we trained three variants of our method as follows. Ordinary: no coding was applied ($a(u,v,t) = \mathrm{const}$, $p_{x,y}(t) = \mathrm{const}$), which corresponded to light-field reconstruction from a single uncoded image. A-only: only the aperture coding was enabled ($p_{x,y}(t) = \mathrm{const}$). P-only: only the pixel-wise exposure coding was enabled ($a(u,v,t) = \mathrm{const}$). Furthermore, to evaluate the theoretical upper bound, we also prepared a free-form coding over the 5-D space (denoted as Free5D), given by:
158
+
159
+ $$
160
+ I _ {x, y} = \sum_ {(u, v, t) \in S _ {u} \times S _ {v} \times S _ {t}} m (x, y, u, v, t) L _ {x, y} (u, v, t) \tag {6}
161
+ $$
162
+
163
+ where $m(x,y,u,v,t) \in [0,1]$ is a fully trainable modulating pattern periodic over $8 \times 8$ pixels. Note that this is only a software simulation; no hardware realization is available. The five methods mentioned so far differed in their imaging models but aimed for the same goal: reconstructing a dynamic light field ($5 \times 5$ views over 4 time units) from a single observed image. For all the methods, RecNets with the same network structure were jointly trained with the respective coding patterns on the same training dataset for the same number of epochs.
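
The Free5D upper bound of Eq. (6) is straightforward to simulate; below is a small self-contained sketch with a random (rather than trained) $8 \times 8$-periodic 5-D mask.

```python
import numpy as np

Nu, Nv, Nt, Nx, Ny = 5, 5, 4, 64, 64
rng = np.random.default_rng(0)
L = rng.random((Nu, Nv, Nt, Nx, Ny))                 # toy dynamic light field

# Eq. (6): free-form modulation m(x,y,u,v,t), periodic over 8x8 pixels.
m_tile = rng.random((Nu, Nv, Nt, 8, 8))
m = np.tile(m_tile, (1, 1, 1, Nx // 8, Ny // 8))     # (Nu, Nv, Nt, Nx, Ny)
I_free5d = np.einsum('uvtxy,uvtxy->xy', m, L)        # single coded 2-D measurement
print(I_free5d.shape)                                # (64, 64)
```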
164
+
165
+ For quantitative evaluation, we used a computer-generated light field with $5 \times 5$ viewpoints over 200 temporal
166
+
167
+ frames, which was rendered from the Planets scene provided by Sakai et al. [31]. Figure 9 visualizes several reconstructed views (at the top-left viewpoint), horizontal epipolar plane images (EPIs) along the green lines, and the differences from the ground truth (pixel values $\times 3$). The average peak signal-to-noise ratio (PSNR) values over the 25 viewpoints are plotted along the temporal frames in Fig. 10.
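
For reference, the per-frame metric (PSNR averaged over the 25 viewpoints) can be computed as in the following sketch; `gt` and `out` are random placeholders standing in for the ground-truth and reconstructed views of one temporal frame.

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB for signals in [0, peak]."""
    return 10.0 * np.log10(peak ** 2 / np.mean((ref - rec) ** 2))

rng = np.random.default_rng(0)
gt = rng.random((5, 5, 64, 64))                                  # placeholder ground-truth views
out = np.clip(gt + 0.01 * rng.standard_normal(gt.shape), 0, 1)   # placeholder reconstruction

avg = np.mean([psnr(gt[u, v], out[u, v]) for u in range(5) for v in range(5)])
print(f"average PSNR over 25 viewpoints: {avg:.2f} dB")
```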
168
+
169
+ As observed from these results, our method clearly outperformed the other variants and even achieved quality close to that of the ideal Free5D case. Meanwhile, A-only and P-only resulted in poor reconstruction quality, showing their insufficiency as coding methods. Moreover, the poor result of the Ordinary case indicated that although implicit scene priors were learned from the training dataset, they alone were insufficient for high-quality reconstruction. In contrast, the success of our method can be attributed to the elaborate coding scheme applied simultaneously on the aperture and imaging planes, which helped effectively embed the original 5-D information into a single observed image. However, the reconstruction quality of our method exhibited small fluctuations over time. This was closely related to the fact that four time units (temporal frames) were processed as a group. Moreover, our method did not include mechanisms that explicitly encourage temporal consistency, which will be addressed in future work.
170
+
171
+ Working range analysis. We also evaluated the effective working range against motion and disparity using a primitive plane scene. Following Eq. (5), we synthesized a dynamic light field over four time units by using a natural image in Fig. 11 (left) as the texture. The average PSNR values obtained with our method $(\mathrm{A} + \mathrm{P})$ and the three variants (A-only, P-only, and Ordinary) are shown in Fig. 11 (right). Obviously, our method $(\mathrm{A} + \mathrm{P})$ can cover a wider range of motion/disparity values than the others; P-only performed poorly for $d \neq 0$ ; A-only and Ordinary did not work well except for $d = \alpha_{x} = 0$ .
172
+
173
+ In our method (A+P), the reconstruction quality degraded gradually as the velocity and disparity values increased. This means that large motions/disparities are challenging for our method. The working range for the disparity was mainly determined by the 3-D scene structures contained in the original light-field dataset, while the working range for the velocity was related to the virtual motions we assumed when synthesizing the dynamic dataset from static light fields. Note that our imaging system has densely located viewpoints (bounded by the aperture) and a high temporal resolution ($4 \times$ the frame rate of the camera); therefore, both the motion and disparity are usually limited to a small range.
174
+
175
+ Comparison with other methods. We finally compared our method against three other methods. The first two methods [6, 31] were based on coded-aperture imaging. From
176
+
177
+ ![](images/df8745ff8d663a3dedcf5b3bb45d217d866d9ced76404bb7290648d18ffce193.jpg)
178
+ Ground truth (50-th frame)
179
+
180
+ ![](images/4ed143c46cb8d6db30c0f75d97f5660357be37c11e87de5faef0ff4db28a375f.jpg)
181
+
182
+ ![](images/9ab8d782fbec12f61798b575c95f84384c4d6c5402ceee12caf9d1fd9bcb7ae3.jpg)
183
+ A+P (ours)
184
+
185
+ ![](images/595938322433f919009babf5f4cfbb667f7a7d1631d102d4b07fa7e707c825ad.jpg)
186
+
187
+ ![](images/f9d25f0df1db1d5818618bae8df639e5feebeaf0ae2a8394315b7a483c6f641c.jpg)
188
+ Free5D
189
+
190
+ ![](images/cba13365c8890557a270de3d57cda4871d4dae4c594b58132c3f616a977f2381.jpg)
191
+
192
+ ![](images/2849223f20caf4794472d6a9aca7ce0128accb5ebbe8bc22b015b866132597d9.jpg)
193
+
194
+ ![](images/c41ebc81fcb3e5b6ce9112f19f19c7f3fefe7d8baeee747bf49dcdb0af5406f4.jpg)
195
+
196
+ ![](images/0a9888b3a215a48136c3e18f48696a8415b8a93c93180bba18c55e30772d4858.jpg)
197
+ P-only
198
+
199
+ ![](images/3dfce96e5c9008ab92d3b0c6595b7ed94250d1e278790b0f26a9dd9dfc504dd7.jpg)
200
+
201
+ ![](images/743c0e54b0ebc8ed507b3c62edb166a10f7fcebdb74386b6b9c17b5b616e2559.jpg)
202
+ Ordinary
203
+
204
+ ![](images/166d5c66617354237607ae66f976bc82533e3e855fe5c62ae6e77fe510d11820.jpg)
205
+ Ground truth (100-th frame)
206
+
207
+ ![](images/3261e7df008490248bbd669574f9276244a1b144bb032bf12a70cee46537f19b.jpg)
208
+
209
+ ![](images/960abdcce90a15ca4d9acf67c82698074033c2e6bcacf4389fd2a35de33de73d.jpg)
210
+ A+P (ours)
211
+
212
+ ![](images/e7bcd96ee7ae0780c0c87ac54bd466d92101feb2aa0930e818467f2ccafef9bb.jpg)
213
+
214
+ ![](images/1335a292f2ce8dac4868bd46298c05a89c6db70c37e0ff1718b61bb1f3e3c1ea.jpg)
215
+ Free5D
216
+
217
+ ![](images/5054cf0af6aa7c22d6e55cc4990199a127f64d325a6964ab079de20ad8aa96b4.jpg)
218
+ A-only
219
+
220
+ ![](images/9e2987b8066bb899141d8495fae1833c7fefff8d4921e474310dcc2401aa1a87.jpg)
221
+ A-only
222
+
223
+ ![](images/1aa3f089fe646947bc4e8e6a99b063fc253667bf74b9521260f3e68ca4f4eb7b.jpg)
224
+
225
+ ![](images/94b4ad983ac525495b616375c3cfd31d6359594f602eb2e3d05c4735e70be89d.jpg)
226
+ P-only
227
+
228
+ ![](images/704e655df40814218e2ff2bb9193a8c163dab331262edab33a0378f2fc07ec19.jpg)
229
+
230
+ ![](images/4a2fffcf88d6d9a0065fbb987c81b7d483a2a66b399f2ba291856c92d1ac9e43.jpg)
231
+ Ordinary
232
+
233
+ ![](images/0ef1213b4c11656c4cf285a281ca7f472cad052f780cf8df1e1e346a1e332a65.jpg)
234
+ Figure 9. Visual results of our method (A+P), Free5D (ideal case), and three ablation cases (A-only, P-only, and Ordinary). Reconstructed top-left views are accompanied with horizontal EPIs along green lines and differences from ground truth ( $\times$ 3 brightness).
235
+ Figure 10. Quantitative reconstruction quality over time for our method (A+P), Free5D (ideal case), and three ablation cases (A-only, P-only, and Ordinary).
236
+
237
+ Guo et al. [6], we adopted a model where a light field for each time unit was reconstructed from a single observed image, which resulted in frame-by-frame observation and light-field reconstruction at the same frame rate as the camera. The method of Sakai et al. [31] observed three consecutive images over time, and reconstructed a light field for the central time. The light field was reconstructed for every two
238
+
239
+ frames of the camera (at $0.5 \times$ its frame rate). We retrained Guo et al.'s and Sakai et al.'s methods on the same dataset as ours until convergence. In addition, we simulated a Lytro-like camera, where each of the $5 \times 5$ views was captured with $1/5 \times 1/5$ the spatial resolution at the same frame rate as the camera. The acquired $5 \times 5$ views were upsampled to the original resolution using bicubic interpolation and compared against the ground truth.
240
+
241
+ For quantitative evaluation, we used the Planets scene, assuming the camera's frame rate to be the same as ours; accordingly, in these three methods, image acquisition was conducted only once every four temporal frames. Note that only our method can obtain the light field at $4\times$ the frame rate of the camera, and thus, this comparison only serves as a reference. The average PSNR values over time are shown in Fig. 12. The method of Sakai et al. [31] failed to follow the fast scene motions, resulting in poor reconstruction quality. The method of Guo et al. [6] reconstructed a finely textured but geometrically inconsistent result, whereas the Lytro-like camera produced a geometrically consistent but blurred result. Our method achieved the best reconstruction quality with a $4 \times$ finer temporal resolution than the camera.
242
+
243
+ Please refer to the supplementary material for more detailed analysis with different training conditions.
244
+
245
+ ![](images/c847d051d203d93aaea4106f02fe14bb934a305eb45e8ae47f083756607d998f.jpg)
246
+ Figure 11. Performance evaluation against various motion and disparity values on primitive plane scene.
247
+
248
+ ![](images/8b837d9b3534175dabb8ef881aa3f188a7a7f2c017765343c5d9ff4b4804217a.jpg)
249
+ Figure 12. Quantitative quality over time compared against other methods (Guo et al. [6], Sakai et al. [31], and Lytro-like camera).
250
+
251
+ ![](images/3a099aab62ff644942fd869171beaa34e4131625ef42eda40d3981b20a2c862d.jpg)
252
+ Experimental setup
253
+
254
+ ![](images/e1f0b30d1a56090e50b89dd6d6287d378ee20a0cb038d3f84a178addec889a4d.jpg)
255
+ Reconstructed light field
256
+ Figure 13. Experiment using our prototype camera: experimental setup (left) and reconstructed top-left view accompanied by two EPIs along green and blue lines (center), and reconstructed light field with $5 \times 5$ views (right).
257
+
258
+ # 4.2. Experiment Using Camera Prototype
259
+
260
+ We prepared a target scene by using several objects (miniature animals) placed on an electronic turntable, which produced motions in various directions. Our prototype camera was used to capture the scene at 14.7 fps, from which we reconstructed the dynamic light field at 58.8 fps (4 temporal frames from each exposed image). The reconstructed light field had $5 \times 5$ views, each with the full-sensor resolution ($656 \times 512$ pixels) for each time unit. Our experimental setup and a part of the results are shown in Fig. 13. The reconstructed light field exhibited natural motions over time and consistent parallaxes among the viewpoints (refer to the supplementary video).
261
+
262
+ # 5. Conclusions
263
+
264
+ We proposed a method for compressively acquiring a dynamic light field (a 5-D volume) through a single-shot coded image (a 2-D measurement). Our method was embodied as a camera that synchronously applied aperture
265
+
266
+ coding and pixel-wise exposure coding within a single exposure time, combined with a deep-learning-based algorithm for light-field reconstruction. The coding patterns were jointly optimized with the reconstruction algorithm so as to embed as much of the original information as possible in a single observed image. Experimental results showed that, using a single camera alone, our method can successfully acquire a dynamic light field with $5 \times 5$ views at $4 \times$ the frame rate of the camera. We believe this is a significant advance in the context of compressive light-field acquisition, which will motivate the computational photography community to investigate further. Our future work will include improvements to the network design for better reconstruction quality and generalization to different configurations in terms of the number of views and the number of time units included in a single exposure time.
267
+
268
+ Acknowledgement: Special thanks go to Yukinobu Sugiyama and Kenta Endo at Hamamatsu Photonics K.K. for providing the image sensor.
269
+
270
+ # References
271
+
272
+ [1] Edward H Adelson and John YA Wang. Single lens stereo with a plenoptic camera. IEEE transactions on pattern analysis and machine intelligence, 14(2):99-106, 1992. 1
273
+ [2] Jun Arai, Fumio Okano, Haruo Hoshino, and Ichiro Yuyama. Gradient-index lens-array method based on real-time integral photography for three-dimensional images. Applied optics, 37(11):2034-2045, 1998. 1
274
+ [3] S. Derin Babacan, Reto Ansorge, Martin Luessi, Pablo Ruiz Mataran, Rafael Molina, and Aggelos K Katsaggelos. Compressive light field sensing. IEEE Transactions on image processing, 21(12):4746-4757, 2012. 1, 2
275
+ [4] Ayan Chakrabarti. Learning sensor multiplexing design through back-propagation. In International Conference on Neural Information Processing Systems, pages 3089-3097, 2016. 2
276
+ [5] Toshiaki Fujii, Kensaku Mori, Kazuya Takeda, Kenji Mase, Masayuki Tanimoto, and Yasuhito Suenaga. Multipoint measuring system for video and sound - 100-camera and microphone system. In IEEE International Conference on Multimedia and Expo, pages 437-440, 2006. 1
277
+ [6] Mantang Guo, Junhui Hou, Jing Jin, Jie Chen, and Lap-Pui Chau. Deep spatial-angular regularization for compressive light field reconstruction over coded apertures. In European Conference on Computer Vision, pages 278-294, 2020. 1, 2, 6, 7, 8
278
+ [7] Mayank Gupta, Arjun Jauhari, Kuldeep Kulkarni, Suren Jayasuriya, Alyosha Molnar, and Pavan Turaga. Compressive light field reconstructions using deep learning. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1277-1286, 2017. 1, 2
279
+ [8] Saghi Hajisharif, Ehsan Miandji, Christine Guillemot, and Jonas Unger. Single sensor compressive light field video camera. Computer Graphics Forum, 39(2):463-474, 2020. 1
280
+ [9] Yasunobu Hitomi, Jinwei Gu, Mohit Gupta, Tomoo Mitsunaga, and Shree K. Nayar. Video from a single coded exposure photograph using a learned over-complete dictionary. In International Conference on Computer Vision, pages 287-294, 2011. 2
281
+ [10] Ronghang Hu, Nikhila Ravi, Alexander C. Berg, and Deepak Pathak. Worldsheet: Wrapping the world in a 3D sheet for view synthesis from a single image. In International Conference on Computer Vision, 2021. 2
282
+ [11] Michael Iliadis, Leonidas Spinoulas, and Aggelos K. Katsaggelos. Deepbinarymask: Learning a binary mask for video compressive sensing, 2016. 2
283
+ [12] Yasutaka Inagaki, Yuto Kobayashi, Keita Takahashi, Toshiaki Fujii, and Hajime Nagahara. Learning to capture light fields through a coded aperture camera. In European Conference on Computer Vision, pages 418-434, 2018. 1, 2, 5
284
+ [13] Aaron Isaksen, Leonard McMillan, and Steven J. Gortler. Dynamically reparameterized light fields. In ACM SIGGRAPH, pages 297-306, 2000. 1
285
+ [14] Seungjae Lee, Changwon Jang, Seokil Moon, Jaebum Cho, and Byoungho Lee. Additive light field displays: realization
286
+
287
+ of augmented reality with holographic optical elements. ACM Transactions on Graphics, 35(4):1-13, 2016. 1
288
+ [15] Yuqi Li, Miao Qi, Rahul Gulve, Mian Wei, Roman Genov, Kiriakos N. Kutulakos, and Wolfgang Heidrich. End-to-end video compressive sensing using anderson-accelerated unrolled networks. In International Conference on Computational Photography, pages 137-148, 2020. 2
289
+ [16] Chia-Kai Liang, Tai-Hsu Lin, Bing-Yi Wong, Chi Liu, and Homer H Chen. Programmable aperture photography: multiplexed light field acquisition. ACM Transactions on Graphics, 27(3):1-10, 2008. 1, 2
290
+ [17] Kazuki Maeno, Hajime Nagahara, Atsushi Shimada, and Rin-Ichiro Taniguchi. Light field distortion feature for transparent object recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2786-2793, 2013. 1
291
+ [18] Kshitij Marwah, Gordon Wetzstein, Yosuke Bando, and Ramesh Raskar. Compressive light field photography using overcomplete dictionaries and optimized projections. ACM Transactions on Graphics, 32(4):1-12, 2013. 1, 2
292
+ [19] Ehsan Miandji, Saghi Hajisharif, and Jonas Unger. A unified framework for compression and compressed sensing of light fields and light field videos. ACM Transactions on Graphics, 38(3):1-18, 2019. 2
293
+ [20] Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, and Abhishek Kar. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Transactions on Graphics, 38:1-14, 2019. 1
294
+ [21] Vishal Monga, Yuelong Li, and Yonina C. Eldar. Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing. IEEE Signal Processing Magazine, 38(2):18-44, 2021. 2
295
+ [22] Ofir Nabati, David Mendlovic, and Raja Giryes. Fast and accurate reconstruction of compressed color light field. In International Conference on Computational Photography, pages 1-11, 2018. 1, 2
296
+ [23] Hajime Nagahara, Changyin Zhou, Takuya Watanabe, Hiroshi Ishiguro, and Shree K Nayar. Programmable aperture camera using LCoS. In European Conference on Computer Vision, pages 337-350, 2010. 1, 2
297
+ [24] Ren Ng. Digital light field photography. PhD thesis, Stanford University, 2006. 1
298
+ [25] Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz, and Pat Hanrahan. Light field photography with a hand-held plenoptic camera. Computer Science Technical Report CSTR, 2(11):1-11, 2005. 1
299
+ [26] Shijie Nie, Lin Gu, Yinqiang Zheng, Antony Lam, Nobutaka Ono, and Imari Sato. Deeply learned filter response functions for hyperspectral reconstruction. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4767-4776, 2018. 2
300
+ [27] Simon Niklaus, Long Mai, Jimei Yang, and Feng Liu. 3D ken burns effect from a single image. ACM Transactions on Graphics, 38(6):1-15, 2019. 2
301
+ [28] Ramesh Raskar, Amit Agrawal, and Jack Tumblin. Coded exposure photography: Motion deblurring using fluttered shutter. ACM Transactions on Graphics, 25(3):795-804, 2006. 2
302
+
303
+ [29] Raytrix. 3D light field camera technology, 2021. https://www.raytrix.de/. 1
304
+ [30] Dikpal Reddy, Ashok Veeraraghavan, and Rama Chellappa. P2C2: Programmable pixel compressive camera for high speed imaging. In IEEE Conference on Computer Vision and Pattern Recognition, pages 329-336, 2011. 2
305
+ [31] Kohei Sakai, Keita Takahashi, Toshiaki Fujii, and Hajime Nagahara. Acquiring dynamic light fields through coded aperture camera. In European Conference on Computer Vision, pages 368-385, 2020. 1, 2, 5, 6, 7, 8
306
+ [32] Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1874-1883, 2016. 5
307
+ [33] Meng-Li Shih, Shih-Yang Su, Johannes Kopf, and Jia-Bin Huang. 3D photography using context-aware layered depth inpainting. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020. 2
308
+ [34] Changha Shin, Hae-Gon Jeon, Youngjin Yoon, In So Kweon, and Seon Joo Kim. EPINET: A fully-convolutional neural network using epipolar geometry for depth from light field images. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4748–4757, 2018. 1
309
+ [35] Pratul P. Srinivasan, Tongzhou Wang, Ashwin Sreelal, Ravi Ramamoorthi, and Ren Ng. Learning to synthesize a 4D RGBD light field from a single image. In IEEE International Conference on Computer Vision, pages 2262-2270, 2017. 2
310
+ [36] He Sun, Adrian V. Dalca, and Katherine L. Bouman. Learning a probabilistic strategy for computational imaging sensor selection. In International Conference on Computational Photography, pages 81–92, 2020. 2
311
+ [37] Yuichi Taguchi, Takafumi Koike, Keita Takahashi, and Takeshi Naemura. TransCAIP: A live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters. IEEE Transactions on Visualization and Computer Graphics, 15(5):841-852, 2009. 1
312
+ [38] Keita Takahashi, Yuto Kobayashi, and Toshiaki Fujii. From focal stack to tensor light-field display. IEEE Transactions on Image Processing, 27(9):4571-4584, 2018. 1
313
+ [39] Salil Tambe, Ashok Veeraraghavan, and Amit Agrawal. Towards motion aware light field video for dynamic scenes. In IEEE International Conference on Computer Vision, pages 1009-1016, 2013. 1
314
+ [40] Richard Tucker and Noah Snavely. Single-view view synthesis with multiplane images. In IEEE Conference on Computer Vision and Pattern Recognition, 2020. 2
315
+ [41] Anil Kumar Vadathya, Sharath Girish, and Kaushik Mitra. A unified learning based framework for light field reconstruction from coded projections. IEEE Transactions on Computational Imaging, 6:304-316, 2019. 1, 2
316
+ [42] Edwin Vargas, Julien N. P. Martel, Gordon Wetzstein, and Henry Arguello. Time-multiplexed coded aperture imaging: Learned coded aperture and pixel exposures for compressive imaging systems. In International Conference on Computer Vision, 2021. 2
317
+
318
+ [43] Ashok Veeraraghavan, Ramesh Raskar, Amit Agrawal, Ankit Mohan, and Jack Tumblin. Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing. ACM Transactions on Graphics, 26(3):69, 2007. 1
319
+ [44] Ashwin Wagadarikar, Renu John, Rebecca Willett, and David Brady. Single disperser design for coded aperture snapshot spectral imaging. Appl. Opt., 47(10):B44-B51, 2008. 2
320
+ [45] Ting-Chun Wang, Jun-Yan Zhu, Ebi Hiroaki, Manmohan Chandraker, Alexei Efros, and Ravi Ramamoorthi. A 4D light-field dataset and cnn architectures for material recognition. In European Conference on Computer Vision, volume 9907, pages 121-138, 2016. 1
321
+ [46] Ting-Chun Wang, Jun-Yan Zhu, Nima Khademi Kalantari, Alexei A. Efros, and Ravi Ramamoorthi. Light field video capture using a learning-based hybrid imaging system. ACM Transactions on Graphics, 36(4):133:1-133:13, 2017. 1
322
+ [47] Yunlong Wang, Fei Liu, Zilei Wang, Guangqi Hou, Zhenan Sun, and Tieniu Tan. End-to-end view synthesis for light field imaging with pseudo 4DCNN. In European Conference on Computer Vision, 2018. 2
323
+ [48] Mian Wei, Navid Sarhangnejad, Zhengfan Xia, Nikita Gusev, Nikola Katic, Roman Genov, and Kiriakos N. Kutulakos. Coded two-bucket cameras for computer vision. In European Conference on Computer Vision, pages 55-73, 2018. 2
324
+ [49] Bennett Wilburn, Neel Joshi, Vaibhav Vaish, Eino-Ville Talvala, Emilio Antunez, Adam Barth, Andrew Adams, Mark Horowitz, and Marc Levoy. High performance imaging using large camera arrays. ACM Transactions on Graphics, 24(3):765-776, 2005. 1
325
+ [50] Olivia Wiles, Georgia Gkioxari, Richard Szeliski, and Justin Johnson. Synsin: End-to-end view synthesis from a single image. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020. 2
326
+ [51] W. Williem, In Kyu Park, and Kyoung Mu Lee. Robust light field depth estimation using occlusion-noise aware data costs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(10):2484-2497, 2018. 1
327
+ [52] Yicheng Wu, Vivek Boominathan, Huaijin Chen, Aswin Sankaranarayanan, and Ashok Veeraraghavan. Phasecam3D — learning phase masks for passive single view depth estimation. In International Conference on Computational Photography, pages 1–12, 2019. 2
328
+ [53] Henry Wing Fung Yeung, Junhui Hou, Xiaoming Chen, Jie Chen, Zhibo Chen, and Yuk Ying Chung. Light field spatial super-resolution using deep efficient spatial-angular separable convolution. IEEE Transactions on Image Processing, 28(5):2319-2330, 2019. 2
329
+ [54] Michitaka Yoshida, Toshiki Sonoda, Hajime Nagahara, Kenta Endo, Yukinobu Sugiyama, and Rin-ichiro Taniguchi. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure. IEEE Transactions on Computational Imaging, 6:463–476, 2020. 1, 2, 4
330
+ [55] Michitaka Yoshida, Akihiko Torii, Masatoshi Okutomi, Kenta Endo, Yukinobu Sugiyama, Rin-ichiro Taniguchi, and Hajime Nagahara. Joint optimization for compressive video
331
+
332
+ sensing and reconstruction under hardware constraints. In European Conference on Computer Vision, 2018. 5
333
+ [56] Xin Yuan, David J. Brady, and Aggelos K. Katsaggelos. Snapshot compressive imaging: Theory, algorithms, and applications. IEEE Signal Processing Magazine, 38(2):65-88, 2021. 2
334
+ [57] Xin Yuan, Yang Liu, Jinli Suo, and Qionghai Dai. Plug-and-play algorithms for large-scale snapshot compressive imaging. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1444-1454, 2020. 2
335
+ [58] Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. Stereo magnification: Learning view synthesis using multiplane images. ACM Transactions on Graphics, 37:1-12, 2018. 1
acquiringadynamiclightfieldthroughasingleshotcodedimage/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9c13767cbab2350287f3a8695040e973bbb7044e853826bebca4caf87f6cdbc9
3
+ size 644747
acquiringadynamiclightfieldthroughasingleshotcodedimage/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:88b827570d971d0022099fc6ddffeaba9ac934bded24e9406a3a1b0fd32ce1b3
3
+ size 436843
activelearningbyfeaturemixing/eca455da-15ce-41de-885d-779d788a4780_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e7f7bb8fe0493df1d3e9bfb26c20e707801f1c0da9e6df015b66a824845d7c04
3
+ size 78484
activelearningbyfeaturemixing/eca455da-15ce-41de-885d-779d788a4780_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d42d4c0f640296066877d2887ee8a177d0d0e5037ed5fd84213a6b9044e0420d
3
+ size 98590