SlowGuess committed
Commit f132767 · verified · 1 Parent(s): 9947699

Add Batch bc1ea508-71ea-44e2-bc59-1e612a60d4fa

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. 1000fpshdrvideowithaspikergbhybridcamera/1c93f555-c37f-43ed-866a-0e7c5d4458e6_content_list.json +3 -0
  2. 1000fpshdrvideowithaspikergbhybridcamera/1c93f555-c37f-43ed-866a-0e7c5d4458e6_model.json +3 -0
  3. 1000fpshdrvideowithaspikergbhybridcamera/1c93f555-c37f-43ed-866a-0e7c5d4458e6_origin.pdf +3 -0
  4. 1000fpshdrvideowithaspikergbhybridcamera/full.md +300 -0
  5. 1000fpshdrvideowithaspikergbhybridcamera/images.zip +3 -0
  6. 1000fpshdrvideowithaspikergbhybridcamera/layout.json +3 -0
  7. 1vs100parameterefficientlowrankadapterfordensepredictions/3b75c6c9-33bc-4e41-9df3-2e14ac85ef59_content_list.json +3 -0
  8. 1vs100parameterefficientlowrankadapterfordensepredictions/3b75c6c9-33bc-4e41-9df3-2e14ac85ef59_model.json +3 -0
  9. 1vs100parameterefficientlowrankadapterfordensepredictions/3b75c6c9-33bc-4e41-9df3-2e14ac85ef59_origin.pdf +3 -0
  10. 1vs100parameterefficientlowrankadapterfordensepredictions/full.md +359 -0
  11. 1vs100parameterefficientlowrankadapterfordensepredictions/images.zip +3 -0
  12. 1vs100parameterefficientlowrankadapterfordensepredictions/layout.json +3 -0
  13. 2pcnettwophaseconsistencytrainingfordaytonightunsuperviseddomainadaptiveobjectdetection/818b1ea7-c7c2-488e-9c91-78c9a94fffa2_content_list.json +3 -0
  14. 2pcnettwophaseconsistencytrainingfordaytonightunsuperviseddomainadaptiveobjectdetection/818b1ea7-c7c2-488e-9c91-78c9a94fffa2_model.json +3 -0
  15. 2pcnettwophaseconsistencytrainingfordaytonightunsuperviseddomainadaptiveobjectdetection/818b1ea7-c7c2-488e-9c91-78c9a94fffa2_origin.pdf +3 -0
  16. 2pcnettwophaseconsistencytrainingfordaytonightunsuperviseddomainadaptiveobjectdetection/full.md +301 -0
  17. 2pcnettwophaseconsistencytrainingfordaytonightunsuperviseddomainadaptiveobjectdetection/images.zip +3 -0
  18. 2pcnettwophaseconsistencytrainingfordaytonightunsuperviseddomainadaptiveobjectdetection/layout.json +3 -0
  19. 3davatarganbridgingdomainsforpersonalizededitableavatars/ddf7c6ad-f988-4a54-8cf6-7aff7d8dd81c_content_list.json +3 -0
  20. 3davatarganbridgingdomainsforpersonalizededitableavatars/ddf7c6ad-f988-4a54-8cf6-7aff7d8dd81c_model.json +3 -0
  21. 3davatarganbridgingdomainsforpersonalizededitableavatars/ddf7c6ad-f988-4a54-8cf6-7aff7d8dd81c_origin.pdf +3 -0
  22. 3davatarganbridgingdomainsforpersonalizededitableavatars/full.md +279 -0
  23. 3davatarganbridgingdomainsforpersonalizededitableavatars/images.zip +3 -0
  24. 3davatarganbridgingdomainsforpersonalizededitableavatars/layout.json +3 -0
  25. 3dawareconditionalimagesynthesis/b9625555-02d4-4da7-b507-7cd64cc67a00_content_list.json +3 -0
  26. 3dawareconditionalimagesynthesis/b9625555-02d4-4da7-b507-7cd64cc67a00_model.json +3 -0
  27. 3dawareconditionalimagesynthesis/b9625555-02d4-4da7-b507-7cd64cc67a00_origin.pdf +3 -0
  28. 3dawareconditionalimagesynthesis/full.md +382 -0
  29. 3dawareconditionalimagesynthesis/images.zip +3 -0
  30. 3dawareconditionalimagesynthesis/layout.json +3 -0
  31. 3dawarefaceswapping/66d1bee4-1a69-4f6f-8a65-3f5202fddfc5_content_list.json +3 -0
  32. 3dawarefaceswapping/66d1bee4-1a69-4f6f-8a65-3f5202fddfc5_model.json +3 -0
  33. 3dawarefaceswapping/66d1bee4-1a69-4f6f-8a65-3f5202fddfc5_origin.pdf +3 -0
  34. 3dawarefaceswapping/full.md +338 -0
  35. 3dawarefaceswapping/images.zip +3 -0
  36. 3dawarefaceswapping/layout.json +3 -0
  37. 3dawarefaciallandmarkdetectionviamultiviewconsistenttrainingonsyntheticdata/4aaf53b5-ffe9-4822-bbbc-9f293082f284_content_list.json +3 -0
  38. 3dawarefaciallandmarkdetectionviamultiviewconsistenttrainingonsyntheticdata/4aaf53b5-ffe9-4822-bbbc-9f293082f284_model.json +3 -0
  39. 3dawarefaciallandmarkdetectionviamultiviewconsistenttrainingonsyntheticdata/4aaf53b5-ffe9-4822-bbbc-9f293082f284_origin.pdf +3 -0
  40. 3dawarefaciallandmarkdetectionviamultiviewconsistenttrainingonsyntheticdata/full.md +359 -0
  41. 3dawarefaciallandmarkdetectionviamultiviewconsistenttrainingonsyntheticdata/images.zip +3 -0
  42. 3dawarefaciallandmarkdetectionviamultiviewconsistenttrainingonsyntheticdata/layout.json +3 -0
  43. 3dawaremulticlassimagetoimagetranslationwithnerfs/38da797f-7f59-48cd-af34-af72487f73d0_content_list.json +3 -0
  44. 3dawaremulticlassimagetoimagetranslationwithnerfs/38da797f-7f59-48cd-af34-af72487f73d0_model.json +3 -0
  45. 3dawaremulticlassimagetoimagetranslationwithnerfs/38da797f-7f59-48cd-af34-af72487f73d0_origin.pdf +3 -0
  46. 3dawaremulticlassimagetoimagetranslationwithnerfs/full.md +301 -0
  47. 3dawaremulticlassimagetoimagetranslationwithnerfs/images.zip +3 -0
  48. 3dawaremulticlassimagetoimagetranslationwithnerfs/layout.json +3 -0
  49. 3dawareobjectgoalnavigationviasimultaneousexplorationandidentification/e3176243-c1cd-415f-8bca-116983524509_content_list.json +3 -0
  50. 3dawareobjectgoalnavigationviasimultaneousexplorationandidentification/e3176243-c1cd-415f-8bca-116983524509_model.json +3 -0
1000fpshdrvideowithaspikergbhybridcamera/1c93f555-c37f-43ed-866a-0e7c5d4458e6_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c31e77748817f3c1e1f26d23754a88628e01b22d6a40a3cba8ea3a1cfa457fb6
+ size 80415
1000fpshdrvideowithaspikergbhybridcamera/1c93f555-c37f-43ed-866a-0e7c5d4458e6_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8aaba25ac0422bf5d7464fbf976bf5ef96ee29a7e73178a6121600df28593615
+ size 103574
1000fpshdrvideowithaspikergbhybridcamera/1c93f555-c37f-43ed-866a-0e7c5d4458e6_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91e1a22f7e48868eb2309c15ab691c0a8088f654066cc8bb02d1682ac9dea8e4
+ size 8925125
1000fpshdrvideowithaspikergbhybridcamera/full.md ADDED
@@ -0,0 +1,300 @@
1
+ # 1000 FPS HDR Video with a Spike-RGB Hybrid Camera
2
+
3
+ Yakun Chang $^{1,2}$ Chu Zhou $^{3}$ Yuchen Hong $^{1,2}$ Liwen Hu $^{2}$ Chao Xu $^{3}$ Tiejun Huang $^{1,2}$ Boxin Shi $^{1,2*}$
4
+
5
+ $^{1}$ National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University
6
+ $^{2}$ National Engineering Research Center of Visual Technology, School of Computer Science, Peking University
7
+ $^{3}$ National Key Laboratory of General AI, School of Intelligence Science and Technology, Peking University {yakunchang, zhou_chu, huliwen, tjhuang, shiboxin}@pku.edu.cn yuchenhong.cn@gmail.com, xuchao@cis.pku.edu
8
+
9
+ # Abstract
10
+
11
+ Capturing high frame rate and high dynamic range (HFR&HDR) color videos in high-speed scenes with conventional frame-based cameras is very challenging. A higher frame rate is usually achieved by using a shorter exposure time, so the captured video is severely corrupted by noise. Alternating exposures can alleviate the noise issue but sacrifice frame rate because long-exposure frames are involved. The neuromorphic spiking camera records high-speed scenes of high dynamic range without colors using a completely different sensing mechanism and visual representation. We introduce a hybrid camera system composed of a spiking and an alternating-exposure RGB camera to capture HFR&HDR scenes with high fidelity. Our insight is to bring each camera's superiority into full play. The spike frames, with accurate fast motion information encoded, are first reconstructed for motion representation, from which the spike-based optical flows guide the recovery of missing temporal information for long-exposure RGB images while retaining their reliable color appearances. With the strong temporal constraint estimated from spike trains, both missing and distorted colors across RGB frames are recovered to generate time-consistent and HFR color frames. We collect a new Spike-RGB dataset that contains 300 sequences of synthetic data and 20 groups of real-world data to demonstrate 1000 FPS HDR videos outperforming HDR video reconstruction methods and commercial high-speed cameras.
12
+
13
+ # 1. Introduction
14
+
15
+ The spiking camera [17] and event camera [10] are neuromorphic sensors working differently from conventional frame-based digital cameras, which have many attractive characteristics, e.g., high-speed (perceiving scene
16
+
17
+ ![](images/60240d2e429f7bbaa254aba2a45842b1ba35e1ca5ec1952b36f6954e438ed27c.jpg)
18
+ Figure 1. (a) We build a spike-RGB hybrid camera system to achieve 1000 FPS HDR video reconstruction<sup>1</sup>. (b) The RGB camera uses alternating-exposure mode with a frame rate of 60 FPS, where $t_s$ , $4t_s$ , and $12t_s$ are the short, middle, and long exposure in our setup, respectively. The sampling frequency of the spiking camera is $20000\mathrm{Hz}$ .
19
+
20
+ radiance changes at the microsecond level), high dynamic range (HDR, $\geq 100$ dB). However, since they only record neuromorphic signals, i.e., spike trains [64] and event streams [25], which are less friendly to the human visual system and cannot be directly processed by CNN-based models for video frames [40, 41], preprocessing modules that convert neuromorphic signals into compatible formats are usually required when applying them to frame-based vision algorithms [61, 65]. In comparison with event streams, spike trains contain concrete textured information of scene radiances, which are more suitable for reconstructing high frame rate (HFR) videos [61-64]. However, since the spiking camera only encodes the absolute intensities of environments, colors are absent in the reconstructed video frames.
21
+
22
+ When capturing with a frame-based RGB camera, quality of recorded colors for each frame is determined by trading off the exposure time, ambient light, and target objects' moving speed [57]. For high-speed dynamic scenes, it often
23
+
24
+ requires setting a shorter exposure time to guarantee a higher frame rate and avoid motion blur. In such a situation, since the exposure time is extremely short, the quality of video frames would be severely degraded by noise. Merging a burst of short-exposure images is a simple yet effective approach to reduce the noise level [8, 11]; however, the color shift caused by noise is difficult to correct. Fusing alternating-exposure (using short, middle, and long exposures) RGB frames is commonly used for synthesizing well-exposed images [3, 19, 21]. However, they are not suitable for high-speed scenes. As illustrated in Fig. 1(b), given a sequence of alternating-exposure RGB images, the total time from the start of the current exposure to the start of the next frame, denoted by $T$ , is consistent for all frames, and it is composed of the exposure time $T_{\mathrm{exp}}$ and interval time $T_{\mathrm{itv}}$ (containing the readout and waiting time). It can be seen that the information during the interval time is lost, and the frame rate they could achieve is thus limited to dozens of FPS. Another possible solution is to build a hybrid camera system to capture a low frame rate (LFR) color sequence and high-speed neuromorphic signals simultaneously, then use the neuromorphic signals to interpolate [51, 52] and deblur [14, 18, 59] the RGB frames. However, the saturated regions are usually ignored, leaving the colors of the interpolated frames still unsatisfactory. An HDR intensity map (which does not contain any chromatic information) built from the neuromorphic signals can also be used to compensate for the missing textures in the saturated regions [15]. But such an approach is not robust for scenes with large areas of saturated regions, due to the heavy reliance on the chrominance compensation network to hallucinate the color.
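
For concreteness, the timing above can be made explicit with the values used in this paper's setup (60 FPS RGB frames; $t_s = 1$ ms short exposure with $4t_s$ and $12t_s$ middle and long exposures; 20000 Hz spike sampling). A small illustrative sketch:

```python
# Exposure vs. interval timing of the alternating-exposure RGB stream.
FPS = 60
T = 1000.0 / FPS               # total time per frame in ms (~16.7 ms)
t_s = 1.0                      # short exposure in ms
spike_rate_hz = 20000          # spiking camera sampling frequency

for name, T_exp in {"short": t_s, "middle": 4 * t_s, "long": 12 * t_s}.items():
    T_itv = T - T_exp          # readout + waiting time, during which no color is recorded
    spike_samples = round(T * spike_rate_hz / 1000.0)   # spike samples per RGB frame period
    print(f"{name}: T_exp = {T_exp:.1f} ms, T_itv = {T_itv:.1f} ms, "
          f"spike samples per period = {spike_samples}")
```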
25
+
26
+ In this paper, we propose an all-in-one framework to reconstruct HFR (Fig. 1(a), at the level of 1000 FPS) color videos with high fidelity from the spike trains and a series of alternating-exposure frames captured by a Spike-RGB hybrid camera system simultaneously (Fig. 1(b)). To make full use of the color information in RGB images, we propose a three-stage strategy to deal with different situations using specific modules: (i) For the blurry middle- and long-exposure images, we design a spike-guided deblurring module to recover the corresponding sharp images with faithful colors; (ii) for missing colors during the interval time, we design a spike-guided interpolation module that exploits the abundant motion information (SC-Flow [16]) obtained from spike trains; (iii) for suppressing noise in short-exposure images and maintaining temporal consistency, we design a merging module, which exploits a variant of the recurrent U-Net [42] as its backbone, to complete the HFR&HDR color video reconstruction process. To summarize, this paper makes contributions by proposing:
27
+
28
+ - an all-in-one framework to reconstruct high-speed HDR color video by jointly fusing spike trains and a sequence of alternating-exposure frames;
29
+
30
+ - a three-stage strategy fusing alternating exposures of RGB frames for the generation of well-exposure colors, via a recurrent convolution neural network for continuous frames interpolation guided by spike trains;
31
+ - a Spike-RGB hybrid camera system to demonstrate the applicability of the proposed method for capturing high-speed and high dynamic range scenes.
32
+
33
+ Experimental results show that the proposed method outperforms the state-of-the-art HDR video reconstruction method [3] and commercial cameras with the slow-motion photography capability in reconstructing 1000 FPS HDR color videos on synthetic data and real-world data.
34
+
35
+ # 2. Related Work
36
+
37
+ HDR image and video reconstruction. The most common way to reconstruct HDR images is to fuse a set of LDR images with bracketed exposures [7, 34]. Since the results for dynamic scenes often contain ghosting artifacts, image alignment [28, 45] and deep learning [20, 55] are employed to reconstruct sharp HDR images. To better reduce ghosting artifacts, Lee et al. [24] and Shaw et al. [46] apply the estimated motion information from a high frame rate sequence to facilitate the HDR image synthesis. Messikommer et al. [35] also achieve HDR reconstruction by combining bracketed-exposure RGB images and events. There are methods being designed for HDR reconstruction from a single image. These methods cannot recover the missing textures in clipped regions [9, 44]. Abhiram and Chan [1] reconstruct HDR images with a quanta image sensor (QIS). Han et al. [15] find that the reconstructed intensity maps from event streams and spike trains contain abundant textures saturated in LDR images. Therefore, they exploit intensity maps to guide HDR image restoration. For the capturing of HDR videos, many existing methods use specialized hardware, such as scanline exposure [13], per-pixel exposure [37], or multiple sensors [33, 50]. Due to the particularity of hardware, these methods are limited to narrow applications. Merging alternating-exposure image sequences is the most common yet effective way to reconstruct HDR videos [12, 19, 21, 22, 30, 31]. Recently, Chen et al. [3] propose a coarse-to-fine network that performs alignment and fusion sequentially both in the image and feature space. However, these methods can only deal with LFR videos with about 20-60 FPS.
38
+
39
+ HFR video reconstruction. There is plenty of data redundancy in capturing HFR videos directly with commercial high-speed cameras, e.g., the Phantom camera². Building a hybrid system with a high-resolution LFR camera and a low-resolution HFR camera, and utilizing HFR signals to reconstruct a sequence of sharp images from blurred images [2, 49] is a more data-efficient way for HFR video
40
+
41
+ ![](images/98c720d4cece126ecdfa4451934d7d939ae0de158ecc49548ba1c53ceaaf3e0d.jpg)
42
+ Figure 2. (a) The pipeline of the proposed solution. It contains three steps: Step $①$ spike preprocessing (Sec. 3.2), Step $②$ RGB frame processing (Sec. 3.3), and Step $③$ merging into HFR video (Sec. 3.4). Given the spike trains, we firstly estimate the optical flow from them as well as reconstruct spike frames. Secondly, we rectify the uneven brightness with a linear mapping function and use spike-guided deblurring (SG-deblur) to reconstruct sharp color frames. Finally, we use spike-guided frame interpolation (SG-interpolation) to recover the missing colors during $T_{\mathrm{itv}}$ , and reconstruct time-consistent color frames. (b) and (c) show the detailed pipeline of SG-deblur and SG-interpolation.
43
+
44
+ ![](images/a3796d73d2d27d9cb39525bfe02cc066e9e00718bd42123a115adabad161f76c.jpg)
45
+
46
+ reconstruction. Li et al. [26] use a stereo pair of low-resolution HFR and high-resolution LFR cameras to calculate the fast motion and the depth map. Paliwal and Kalantari [38] compute optical flows between two existing frames by utilizing the content of auxiliary HFR videos. Jiang et al. [18] recover a sharp video sequence from a motion-blurred image by integrating the visual and temporal knowledge that is contained in the events. Xu et al. [54] achieve real-world event-based deblurring with a self-supervised learning method. Tulyakov et al. [52] propose the Time Lens that utilizes high-speed events to achieve video frame interpolation (VFI). Following that, Time Lens++ [51] further improves the performance. Because real data are absent, Yu et al. [56] propose a weakly supervised method with the help of subpixel attention learning. Although the event-based interpolation realizes HFR video reconstruction [51, 52], the recovered color quality is usually unsatisfactory because a single exposure cannot balance artifacts from noise and blur; we therefore propose to jointly fuse the high-speed spike signals and alternating-exposure RGB frames to achieve high-quality reconstruction.
47
+
48
+ # 3. Approach
49
+
50
+ # 3.1. Overview
51
+
52
+ Our goal is to reconstruct HFR&HDR videos from the binary spike trains $\mathbb{S}(x,y) = \{s(x,y,t)\} (s(x,y,t) = 1$ if the accumulated photons reach a certain threshold, then the accumulator is reset and $s(x,y,t) = 0$ before the next spike is fired [17]) and LFR alternating-exposure RGB frames $\mathbb{B} = \{\mathbf{B}_k\} ^3$ , where $(x,y)$ denote the coordinates of spikes, $t$
53
+
54
+ denotes the timestamp, and $k$ denotes the index of an RGB image in the sequence. As shown in Fig. 2(a), to achieve this goal, we design a pipeline that consists of three steps:
55
+
56
+ Step ①: Spike preprocessing (Sec. 3.2). We estimate the optical flow $\mathbf{F}_i$ and spike frames $\mathbf{I}_i$ from the spike trains:
57
+
58
+ $$
59
+ \mathbf {F} _ {i} (x, y) = \mathcal {S C} \left(s \left(x, y, t _ {i} \rightarrow t _ {i + 1}\right)\right), \tag {1}
60
+ $$
61
+
62
+ $$
63
+ \mathbf {I} _ {i} (x, y) = \int_ {t _ {i} - t _ {f} / 2} ^ {t _ {i} + t _ {f} / 2} s (x, y, t) d t, \tag {2}
64
+ $$
65
+
66
+ where $\mathcal{SC}(\cdot)$ denotes optical flow estimation with Hu et al.'s [16] method, $i$ and $t_i$ denote the index and timestamp of spike frames, and $t_f$ is the time window. In Sec. 3.2, we further super-resolve $\mathbf{I}_i$ in the feature space.
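
A minimal sketch of Eq. (2): accumulating binary spikes over a window of $t_f$ samples centered at timestamp $t_i$ to form a spike frame. The array shapes and window length below are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def spike_frame(spikes: np.ndarray, i: int, t_f: int) -> np.ndarray:
    """Eq. (2): integrate binary spikes s(x, y, t) over a window of t_f samples
    centered at timestamp index i. `spikes` has shape (T, H, W) with values in {0, 1}."""
    lo = max(i - t_f // 2, 0)
    hi = min(i + t_f // 2, spikes.shape[0])
    return spikes[lo:hi].sum(axis=0).astype(np.float32)

# Toy usage with a random spike train (the real system samples at 20000 Hz).
spikes = (np.random.rand(200, 250, 400) < 0.1).astype(np.uint8)
I_100 = spike_frame(spikes, i=100, t_f=32)
```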
67
+
68
+ Step ②: RGB frame preprocessing (Sec. 3.3). For the 60 FPS RGB images captured with alternating exposures, i.e., $t_s, 4t_s$ , and $12t_s$ , we firstly unify the uneven brightness with a linear mapping function. Then we conduct motion deblurring for $4t_s$ and $12t_s$ images. For the $t_s$ images, when $t_s$ is sufficiently short, i.e., 1 ms, we assume the short-exposure image is free from motion blur, and take $t_s$ as the reference time for the motion deblurring. Consequently, we can recover 4 and 12 sharp images from $4t_s$ and $12t_s$ images, respectively. As shown in Fig. 2(b), we use $\mathbf{B}^l$ to denote a blurry image, and the motion deblurring operation can be formulated as: $\{\mathbf{B}_j^l\} = \mathcal{R}(\mathbf{B}^l, \{\mathbf{I}_j | j \in \mathcal{N}_l\}, \mathbf{B}^s)$ , where $j$ is the index of a recovered sharp image, $\mathcal{R}(\cdot)$ is sharp image reconstruction, $\{\mathbf{I}_j | j \in \mathcal{N}_l\}$ is the corresponding spike frames, and $\mathbf{B}^s$ is the nearest short-exposure RGB frame.
69
+
70
+ Step ③: Merging into HFR video (Sec. 3.4). Following Step ②, for the interval time $(T_{\mathrm{itv}})$ during which colors are not recorded, we bidirectionally query the two nearest sharp RGB
71
+
72
+ ![](images/69d8b311ce36a4530ecf232650dd28197899054c0313144d824fc179e70d4d2f.jpg)
73
+ warping
74
+ Figure 3. To increase the spatial resolution, we adopt flow-based warping to merge 5 adjacent spike frames.
75
+
76
+ ![](images/cf39b78ad3ae1e85fd6294c3574fe337a9e139e2680711172548c5216ff96c0f.jpg)
77
+
78
+ ![](images/ae9042900c55487e39988804c1314cb26d945fa12eb7432a4fe8c52019a9b1ae.jpg)
79
+
80
+ ![](images/c8d6dedbab383c4cabfdbf4100b648119d41bdf85f6b86c070cb6ca3c7313a25.jpg)
81
+
82
+ ![](images/49bbef8a2cd1ba9b8519bf1120a392f74430894e19e52a34957f5c27c82657ef.jpg)
83
+
84
+ images $\{\mathbf{B}_i^+, \mathbf{B}_i^-\}$ for each spike frame $\mathbf{I}_i$ , and get the warped images $\{\hat{\mathbf{B}}_i^+, \hat{\mathbf{B}}_i^-\}$ with optical flow, where $+$ and $-$ denote the forward and backward warping, respectively. In Fig. 2(c), we provide an illustration of the interpolation procedure. Finally, as shown in Fig. 4, we reconstruct time-consistent color frames, and each frame $\mathbf{C}_i$ is generated by merging the spike frame $\mathbf{I}_i$ with $\{\mathbf{C}_{i-1}, \hat{\mathbf{B}}_i^+, \hat{\mathbf{B}}_i^-\}$ under the strong constraint of optical flow.
85
+
86
+ # 3.2. Spike preprocessing
87
+
88
+ The optical flow estimation and spike frame reconstruction used in Eqn. (1) and Eqn. (2) are theoretically feasible, yet the reconstructed frames practically have two issues: since the integration time $t_f$ is very short, noise is relatively strong; and the spatial resolution of the first-generation spiking camera (VidarOne [17]) is much lower than that of the RGB camera. To reduce the noise and increase the spatial resolution, inspired by the burst-based super-resolution [4] and denoising [27] for conventional RGB images, it is feasible to merge a group of adjacent spike frames with the help of spatial alignment. Moreover, thanks to the continuous motion recording capability of spiking cameras, the optical flow [16] estimated from spike trains makes the alignment even more stable than with RGB images. As illustrated in Fig. 3, we design a computationally efficient module for spike frames, which is formulated as: $\hat{\mathbf{I}}_i = \{\mathcal{W}_{\mathbf{F}_{j\to i}}(\mathbf{I}_j)|j\in \mathcal{N}_i\}$ , where $\mathcal{W}_{\mathbf{F}_{j\to i}}(\cdot)$ denotes the flow-based warping operation, and $\mathcal{N}_i$ denotes a collection of adjacent frames. Then, we feed $\hat{\mathbf{I}}_i$ to a set of convolutional layers, and we use PixelShuffle [47] to increase the spatial resolution while decreasing the number of feature channels. It should be noted that the method for spike frame reconstruction is not unique, which means users can choose other learning-based methods [61, 62, 64]. However, those deep learning models are relatively heavy, and less efficient as submodules fitting into our pipeline.
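
As a rough sketch of this merging-and-upsampling idea (layer counts, channel widths, and the 2x upscale factor are our assumptions, not the paper's exact design), adjacent spike frames are backward-warped to the reference timestamp with the spike-based flow, stacked as channels, and upsampled with PixelShuffle:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def flow_warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp a (B, C, H, W) frame with a (B, 2, H, W) flow via grid_sample."""
    B, _, H, W = frame.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame.device)   # (2, H, W), x first
    coords = grid.unsqueeze(0) + flow                               # sampling positions
    gx = 2.0 * coords[:, 0] / (W - 1) - 1.0                         # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / (H - 1) - 1.0
    return F.grid_sample(frame, torch.stack((gx, gy), dim=-1), align_corners=True)

class SpikeFrameSR(nn.Module):
    """Fuse N warped spike frames and upsample 2x with PixelShuffle (a sketch only)."""
    def __init__(self, n_frames: int = 5, feat: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_frames, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 4, 3, padding=1),   # 4 = 1 output channel * (2x upscale)^2
        )
        self.up = nn.PixelShuffle(2)

    def forward(self, warped_stack: torch.Tensor) -> torch.Tensor:  # (B, n_frames, H, W)
        return self.up(self.body(warped_stack))                     # (B, 1, 2H, 2W)
```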
89
+
90
+ # 3.3. RGB image preprocessing
91
+
92
+ RGB linear mapping. Following previous methods for HDR video reconstruction [3, 19, 21], we first unify the brightness of alternating-exposure RGB frames. Since we use an industrial camera (details in Sec. 3.5) that can acquire data without a nonlinear radiometric response function, the linearity of the captured frames is maintained. We find that the brightness of the frames can maintain a linear relationship with the duration of exposure time. Hence we use the global linear mapping to unify the frame brightness: $\alpha \cdot \mathbf{B}_k(x,y)\rightarrow \mathbf{B}_k(x,y)$ , where $\alpha$ denotes a linear scalar.
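
A minimal sketch of this brightness unification, assuming the scalar $\alpha$ is simply the exposure ratio relative to a chosen reference exposure (how $\alpha$ is obtained is not spelled out in the text, so this is an assumption):

```python
import numpy as np

def unify_brightness(frame: np.ndarray, exposure_ms: float, ref_exposure_ms: float) -> np.ndarray:
    """Global linear mapping alpha * B_k -> B_k for a linear-response sensor:
    rescale each frame as if it had been captured with the reference exposure."""
    alpha = ref_exposure_ms / exposure_ms
    return frame * alpha

# Example: bring 1 ms and 12 ms exposures to the 4 ms (middle) level.
short = np.random.rand(480, 640, 3) * 0.1
long_ = np.random.rand(480, 640, 3)
short_u = unify_brightness(short, exposure_ms=1.0, ref_exposure_ms=4.0)
long_u = unify_brightness(long_, exposure_ms=12.0, ref_exposure_ms=4.0)
```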
93
+
94
+ Spike-guided deblurring. The physical model of the blurring process can be simply formulated as the average of a group of sharp images, i.e., $\mathbf{B}^l (x,y) = \frac{1}{N}\sum_{j = 1}^{N}\mathbf{B}_j^l (x,y)$ , where $N$ denotes the number of sharp images. However, due to the limited dynamic range of the RGB camera, that simplified equation does not hold in the clipped regions of real-world long-exposure frames. In general we should have: $\mathbf{B}^l (x,y)\leq \frac{1}{N}\sum_{j = 1}^{N}\mathbf{B}_j^l (x,y)$ . Therefore, for reconstructing a sequence of sharp HDR images from $\mathbf{B}^l$ , we divide the problem into two sub-tasks: (i) For the well-exposed regions, we use the sharp spike frames to guide motion deblurring; (ii) for the clipped regions where colors are lost, we compensate them with well-retained colors extracted from the adjacent short-exposure image $\mathbf{B}^s$ .
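
The blur model and its violation in clipped regions can be made concrete with a small sketch; the saturation level below is an assumed value:

```python
import numpy as np

SAT = 1.0  # assumed sensor saturation level in normalized linear units

def blur_from_sharp(sharp_seq: np.ndarray) -> np.ndarray:
    """Simulate a long-exposure observation: average N sharp frames, then clip.
    Because of the clipping, B^l <= (1/N) * sum_j B_j^l holds only as an inequality."""
    ideal = sharp_seq.mean(axis=0)      # (1/N) * sum_j B_j^l
    return np.minimum(ideal, SAT)       # clipped observation B^l

def clipped_mask(blurry: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Regions that hit the sensor ceiling, where colors must be compensated
    from the adjacent short-exposure frame B^s."""
    return blurry >= SAT - eps
```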
95
+
96
+ Figure 2(b) shows the spike-guided deblurring (SG-deblur) from $\mathbf{B}^l$ ( $\mathbf{B}^l$ may be a middle- or long-exposure image). Similar to Xu et al. [54], who exploit event frames for motion deblurring, we first concatenate $\mathbf{B}^l$ with $\{\mathbf{I}_l^j\}$ , then extract shallow features and increase feature channels with PixelShuffle [47], which is followed by a set of residual dense blocks (RDBs) [60] and a decoder. To allow the colors in over-exposed regions to be compensated by the adjacent short-exposure RGB image $\mathbf{B}_j^s$ , we warp the short-exposure image with the optical flow estimated from spike trains: $\mathbf{B}_j^s = \mathcal{W}_{\mathbf{F}_{s\rightarrow j}}(\mathbf{B}^s)$ , where $\mathcal{W}_{\mathbf{F}_{s\rightarrow j}}(\cdot)$ denotes the warping operation from timestamp $t_s$ to timestamp $t_j$ . Subsequently, we extract features from $\{\mathbf{B}_j^s\}$ and add residual links between them and the decoder. Finally, we obtain a sequence of sharp color images. Note that the SG-deblur modules for the middle- and long-exposure RGB images share the same architecture, but their parameters are not shared. SG-deblur outputs four images for both $4t_s$ and $12t_s$ frames. For the case of the $12t_s$ frame, we interpolate the 4 frames to 12 frames with flow-based warping.
97
+
98
+ Next, we briefly explain the reason why this event-based model [54] can be applied to a spike-based task. Both event streams and spike trains with the high-speed property have been used for motion deblurring and latent frame reconstruction [14,18,54]. It is necessary to convert them to event frames and spike frames, both of which belong to the category of 2D images. But event frames and spike frames have different physical meanings: Pixel values in an event frame reveal the residual (relatively sparse information) between two adjacent frames, while pixel values in a spike frame represent exactly the texture (relatively dense information) of the corresponding frame. Since both event frames and spike frames are 2D images and the spike frames have denser texture information, we can replace event frames in such a model with spike frames, so as to make the solution to the problem more well-posed.
99
+
100
+ # 3.4. Merging into HFR video
101
+
102
+ RGB interpolation. Given each middle- and long-exposure
103
+
104
+ ![](images/f70c457f89317c1a8dfbd75c2c3839b8139e7a74e3c29226b3c581376d4d252a.jpg)
105
+ Figure 4. Network architecture of the CNN-RNN-based merging module for reconstructing HFR&HDR videos from alternating-exposure RGB frames and HFR spike frames. This module outputs HDR color frames in a step-wise manner. We unroll the module for $M$ steps during training.
106
+
107
+ frame, SG-deblur recovers 4 and 12 images. Therefore, the recovered RGB frames have a frame rate of 340 FPS<sup>4</sup>. But their temporal distribution is quite uneven, e.g., there is no recovered color frame during the interval time $T_{\mathrm{itv}}$ . Fortunately, the spike train contains continuous and dense texture information in the temporal domain. In Step ③, we use the SG-interpolation module to interpolate the RGB frames into a sequence of uniformly distributed images. For each spike frame $\mathbf{I}_i$ , we bidirectionally query its two nearest recovered RGB frames $\{\mathbf{B}_i^+, \mathbf{B}_i^-\}$ and interpolate two color frames $\{\hat{\mathbf{B}}_i^+, \hat{\mathbf{B}}_i^-\}$ with the optical flow estimated from spike trains. When $\{\hat{\mathbf{B}}_i^+, \hat{\mathbf{B}}_i^-\}$ are fed into our merging module, they are weighted by a linear coefficient ( $\oplus$ in Fig. 4) related to the distance between $t_i$ and $\{t_+, t_-\}$ , where $\{t_+, t_-\}$ denote the timestamps of $\{\hat{\mathbf{B}}_i^+, \hat{\mathbf{B}}_i^-\}$ .
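
A sketch of the temporal weighting behind the $\oplus$ operation: the two warped color frames are blended with weights proportional to temporal proximity. The exact blending rule is not spelled out in the text, so the linear rule below is an assumption:

```python
import numpy as np

def blend_bidirectional(B_prev_warped: np.ndarray, B_next_warped: np.ndarray,
                        t_i: float, t_prev: float, t_next: float) -> np.ndarray:
    """Linearly blend two color frames warped to spike timestamp t_i from the
    nearest recovered RGB frames at t_prev < t_i < t_next; the temporally closer
    source frame receives the larger weight."""
    w_next = (t_i - t_prev) / (t_next - t_prev)
    w_prev = 1.0 - w_next
    return w_prev * B_prev_warped + w_next * B_next_warped
```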
108
+
109
+ Merging module. The aforementioned modules reconstruct coarse HFR video frames, which need to be refined for smoothness over time. We build a CNN-RNN-based HFR&HDR video reconstruction network to merge the spike frames and RGB frames, which is shown in Fig. 4. The merging module consists of three encoders, i.e., $\mathcal{E}_I$ , $\mathcal{E}_B$ , and $\mathcal{E}_C$ , which are respectively designed for feature extraction from the current spike frame $\hat{\mathbf{I}}_i$ , the interpolated RGB images $\{\hat{\mathbf{B}}_i^+, \hat{\mathbf{B}}_i^-\}$ , and the previously reconstructed image $\mathbf{C}_{i-1}$ . In $\mathcal{E}_I$ , we use PixelShuffle [47] to make the spatial resolution of spike features consistent with that of RGB features. The extracted features are denoted as $\mathbf{E}_I$ , $\{\mathbf{E}_{B-}, \mathbf{E}_{B+}\}$ , and $\mathbf{E}_{C_{i-1}}$ , respectively.
110
+
111
+ Considering the spike frames and RGB frames may not be perfectly aligned at pixel level for real-world data, we add deformable convolution layers [6] to improve the robustness to this issue. In order to output flicker-free color frames, we adopt two constraints in the merging module:
112
+
113
+ Table 1. Details of the composition of the dataset (res. is the abbreviation of resolution).
114
+
115
+ <table><tr><td>data</td><td>RGB res.</td><td>spike res.</td><td>train/test</td><td>time</td></tr><tr><td>full-synthetic</td><td>500×800</td><td>250×400</td><td>80/20</td><td>0.1s</td></tr><tr><td>real-synthetic</td><td>600×800</td><td>250×400</td><td>160/40</td><td>0.101s</td></tr><tr><td>real-world</td><td>484×784</td><td>242×392</td><td>-/20</td><td>0.101s</td></tr></table>
116
+
117
+ (i) We add three ConvLSTM layers [48] to feed previous states forward in the temporal domain; (ii) we feed $\mathbf{E}_{C_{i-1}}$ into the current step and align it with the current features using flow-based warping. We then use a decoder to map the deep features back to the current output HDR frame $\mathbf{C}_i$ . We achieve the multi-module signal fusion by adding concatenation links between $\{\mathbf{E}_{C_{i-1}}, \mathbf{E}_{B-}, \mathbf{E}_{B+}\}$ and the decoder.
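
A schematic of one unrolled fusion step is sketched below, with a single minimal ConvLSTM cell standing in for the three ConvLSTM layers and simple one-layer encoders and decoder; it illustrates the data flow only, not the authors' network:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell in the spirit of Shi et al. [48]."""
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, (h, c)

class MergeStep(nn.Module):
    """One recurrent step: fuse the spike frame, the two interpolated RGB frames,
    and the flow-warped previous output, then decode the current HDR frame."""
    def __init__(self, feat: int = 32):
        super().__init__()
        self.enc = nn.Conv2d(1 + 3 + 3 + 3, feat, 3, padding=1)  # spike + B_hat+/- + warped C_{i-1}
        self.rnn = ConvLSTMCell(feat, feat)
        self.dec = nn.Conv2d(feat, 3, 3, padding=1)

    def forward(self, spike, b_plus, b_minus, c_prev_warped, state):
        # state = (h, c), zero-initialized before the first unrolled step
        x = torch.relu(self.enc(torch.cat([spike, b_plus, b_minus, c_prev_warped], dim=1)))
        h, state = self.rnn(x, state)
        return self.dec(h), state
```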
118
+
119
+ # 3.5. Implementation Details
120
+
121
+ Because the setting of our method differs from that of existing HDR and video frame interpolation methods, there is no suitable dataset for training and testing it. Therefore, we collect a new one with three components, whose details are summarized in Table 1; sample images are provided in Fig. 5.
122
+
123
+ Part 1: Full-synthetic data. This part of data is obtained by using the spike simulator proposed by Hu et al. [16]. We render 2000 RGB images with their computer graphics based solution as ground truth and generate 2000 spike planes (0.1 s). Since the photons arriving at the sensor follow Poisson probability distribution [43], we synthesize alternating-exposure 60 FPS RGB frames with a Poisson noise model. For the full synthetic data, we randomly select starting time of each group of training data. We randomly shift the RGB frames within 3 pixels to make the trained model more robust to the misalignment in real-world data.
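
A sketch of the Poisson (shot) noise synthesis for the alternating-exposure frames; the photon-count scale is an assumed parameter and motion blur is ignored here:

```python
import numpy as np

def simulate_exposure(radiance: np.ndarray, exposure_ms: float,
                      photons_per_ms: float = 500.0, rng=None) -> np.ndarray:
    """Render one LDR frame: photon counts are Poisson-distributed with a mean
    proportional to scene radiance and exposure time, then normalized and clipped."""
    rng = np.random.default_rng(0) if rng is None else rng
    mean_counts = radiance * exposure_ms * photons_per_ms
    counts = rng.poisson(mean_counts).astype(np.float32)
    frame = counts / (exposure_ms * photons_per_ms)   # back to linear radiance units
    return np.clip(frame, 0.0, 1.0)

# Alternating-exposure triplet (t_s = 1 ms) from one ground-truth radiance map.
radiance = np.random.rand(256, 256, 3)
triplet = [simulate_exposure(radiance, e) for e in (1.0, 4.0, 12.0)]
```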
124
+
125
+ Part 2: Real-synthetic data. To reduce the domain gap between full-synthetic data and real-world data, we design a method to collect real-synthetic (the scenes are real while
126
+
127
+ ![](images/176aa9dfa2ecaf20f74ee48cdeb45fed0d736431be6a2a3e803a4ccf3f70da7d.jpg)
128
+ Figure 5. Example frames from the proposed dataset. Each group shows three alternating-exposure RGB frames (left, from top to bottom rows) and the corresponding spike signals (right).
129
+
130
+ the spike trains are synthetic) data, and we use this part of data to fine-tune our model. The RGB frames are captured with an alternating-exposure mode in slow-motion scenes. Then we synthesize blurry middle-exposure RGB frames by averaging 4 adjacent middle-exposure RGB images, and blurry long-exposure RGB frames are synthesized in a similar way. We synthesize spike trains from ground truth RGB frames with the integrate-and-fire methodology [61].
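
A sketch of the integrate-and-fire spike synthesis in the spirit of [61]; the threshold is an assumed value, and subtracting the threshold (keeping the residual charge) is one common choice of reset:

```python
import numpy as np

def integrate_and_fire(luminance_seq: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Convert a high-frame-rate luminance sequence (T, H, W), values in [0, 1],
    into a binary spike train: each pixel accumulates intensity over time and
    fires a spike whenever the accumulator reaches the threshold, then resets."""
    acc = np.zeros(luminance_seq.shape[1:], dtype=np.float32)
    spikes = np.zeros(luminance_seq.shape, dtype=np.uint8)
    for t, frame in enumerate(luminance_seq):
        acc += frame
        fired = acc >= threshold
        spikes[t][fired] = 1
        acc[fired] -= threshold   # keep the residual charge after firing
    return spikes
```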
131
+
132
+ Part 3: Real-world data. We build a Spike-RGB hybrid camera (Fig. 6) to capture real-world data. The system is composed of an industrial camera (Basler acA800-510uc $^5$ ) with alternating exposure capability and a spiking camera [17]. There is a beam splitter in front of the two sensors. We conduct geometric calibration and time synchronization to align bimodal signals collected by them.
133
+
134
+ Loss and training. The SG-deblur module and the merging module reconstruct images in the linear luminance domain, which covers a high dynamic range of pixel values. Following existing methods for HDR reconstruction, for the output images $\mathbf{C}$ , we compress the range of pixel values by applying the following function proposed by Kalantari et al. [20]: $\mathcal{T}(\mathbf{C}) = \log (1 + \mu \mathbf{C}) / \log (1 + \mu)$ , where $\mathcal{T}(\cdot)$ denotes the tone mapping operation and $\mu$ denotes the amount of compression. For these two modules, we employ widely used $l_{1}$ loss, Structure similarity (SSIM) loss [53], and Learned Perceptual Image Patch Similarity (LPIPS) loss [58]. The total loss at step $i$ for both the motion deblurring and merging modules is
135
+
136
+ $$
137
+ \mathcal {L} _ {\text {total}} (i) = \mathcal {L} _ {l _ {1}} (i) + \beta_ {1} \mathcal {L} _ {\text {SSIM}} (i) + \beta_ {2} \mathcal {L} _ {\text {LPIPS}} (i), \tag {3}
138
+ $$
139
+
140
+ where $\beta_{1} = 1$ and $\beta_{2} = 1$ . For spike-based optical flow estimation using [16], we fine-tune the parameters with full-synthetic data. During training, we resize the RGB images and spike frames to $512 \times 800$ and $256 \times 400$ . We implement our model with PyTorch, set the batch size to 4, and use ADAM optimizer during the training process. We first train the model on full-synthetic data. The SG-deblur module is trained with 50 epochs, before training the merging
141
+
142
+ ![](images/3a9785b5e9c025abd121037f9499ff36cfcf100a90bcaf5ae65255ec856a5815.jpg)
143
+ Figure 6. The prototype of our Spike-RGB imaging system composed of a spiking camera and an RGB camera.
144
+
145
+ module. We unroll the merging module for $M$ steps, and we find $M = 4$ achieves a suitable balance between training time and recovery quality. The total loss for the unrolled $M$ steps is $\mathcal{L}_{\mathrm{merge}} = \sum_{i=1}^{M} \mathcal{L}_{\mathrm{total}}^{\mathrm{M}}(i)$ , where $\mathcal{L}_{\mathrm{total}}^{\mathrm{M}}(i)$ denotes the total loss for the merging module at step $i$ . The initial learning rate for both modules is 0.001, and we decay it to $10^{-6}$ with a linear strategy. For the real-synthetic data, we fine-tune another group of parameters to reduce the gap between synthetic data and real-world data. We use one NVIDIA Tesla A100 for training, and the training procedure consumes about 30 hours.
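
A minimal sketch of the training objective: outputs are μ-law compressed as in Kalantari et al. [20] and then compared with the ground truth using the terms of Eq. (3). The value μ = 5000 is an assumption (a common choice in the HDR literature, not stated here), and the SSIM/LPIPS terms are taken as externally supplied callables rather than re-implemented:

```python
import torch
import torch.nn.functional as F

def mu_law_tonemap(x: torch.Tensor, mu: float = 5000.0) -> torch.Tensor:
    """T(C) = log(1 + mu * C) / log(1 + mu), applied to linear HDR values in [0, 1]."""
    return torch.log1p(mu * x) / torch.log1p(torch.tensor(mu))

def total_loss(pred_hdr, gt_hdr, ssim_fn=None, lpips_fn=None, beta1=1.0, beta2=1.0):
    """Eq. (3) evaluated in the tone-mapped domain. ssim_fn / lpips_fn are optional
    callables (e.g., from the pytorch-msssim and lpips packages); when omitted,
    only the l1 term is used."""
    p, g = mu_law_tonemap(pred_hdr), mu_law_tonemap(gt_hdr)
    loss = F.l1_loss(p, g)
    if ssim_fn is not None:
        loss = loss + beta1 * (1.0 - ssim_fn(p, g))
    if lpips_fn is not None:
        loss = loss + beta2 * lpips_fn(p, g).mean()
    return loss
```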
146
+
147
+ # 4. Experiments
148
+
149
+ # 4.1. Quantitative Evaluation using Synthetic Data
150
+
151
+ Validation on full-synthetic data. Figure 8 shows a group of results on full-synthetic data. We can see that both the flying objects in the short-exposure image and the oversaturated clouds (see the regions marked by boxes) in the long-exposure image are recovered successfully. The results with rich textures and consistent colors show the feasibility of our proposed method.
152
+
153
+ Evaluation on real-synthetic data. To the best of our knowledge, the proposed method is the first framework to reconstruct HFR&HDR videos with the combination of spike trains and alternating-exposure RGB frames. Therefore, it is unfair to compare our method with existing ones, i.e., Kalantari13 [21], Kalantari19 [19], and Chen21 $[3]^{6}$ , which are designed for low frame rate HDR videos.
154
+
155
+ We choose a state-of-the-art HDR video reconstruction method Chen21 [3], which also uses alternating-exposure RGB frames (the closest setup to ours) as a reference. Figure 7 shows the reconstruction results on real-synthetic data of the proposed method and Chen21 [3]. Thanks to the complementary motion information provided by spike trains, the abundant color extracted from alternating-exposure RGB frames, and the accurate textures contained in spike frames, the proposed method is capable of reconstructing rich texture details with less motion blur. For ex
156
+
157
+ ![](images/afa1aa45dd7684f40beb4726479a3aba9c3cd0d96ec6f4bf23778aeeefabdc8f.jpg)
158
+ short
159
+
160
+ ![](images/71cdc16eea019dcd1bd0dd0177ee5024f38cc34f796330d5d1eac074cf09b1e0.jpg)
161
+ middle
162
+ (a)
163
+
164
+ ![](images/6162aa256a287f2f993030fdd823e1c193c5a5945396f892262b2ef1595ff2d3.jpg)
165
+ long
166
+
167
+ ![](images/6eb5166094979d17c7be16571a725ef333c07be4f97c0f8c35d1361d0ff8d59e.jpg)
168
+ short
169
+
170
+ ![](images/18487dec66ddaa2019bfc213e532588aef073461a9d8d2d11543e80ebfbdbd84.jpg)
171
+ middle
172
+ (b)
173
+
174
+ ![](images/865b4f729ea98b545b1620ffe9198a6a878505d1040d35bbaf214b8fe6412abe.jpg)
175
+ long
176
+
177
+ ![](images/fbb712d19253aafc1fd9d055053cde4360d158fb65c53da6d8203063df92c7ec.jpg)
178
+ Figure 7. Visual quality comparison on real-synthetic data between the proposed method and the state-of-the-art HDR video reconstruction method Chen21 [3]. We present two sets of results in (a) and (b). Please zoom in on the electronic version for better details, and watch the HFR videos on the project page.
179
+ Figure 8. Validation on the synthetic data.
180
+
181
+ ample, in the long-exposure frame in the first row of (a), the building marked by a yellow box suffers from severe motion blur and overexposure. Chen21 [3] partially recovers the colors of this building, but it fails to remove the blurry artifacts. In the results generated by our method, the edges are sharp and the colors are vivid. In Fig. 7(b), the motions across RGB frames have a very large span, Chen21 [3] can only recover the corresponding LFR videos, while our method can reconstruct an HFR video with smooth motion.
182
+
183
+ We evaluate the reconstructed HDR videos in terms of PSNR, SSIM, HDR-VDP-2 [32], and HDR-VQM [36]. Table 2 clearly shows that our framework outperforms the state-of-the-art method [3] in all the metrics on the real-synthetic data at 60 FPS, and we also achieve strong performance at 1000 FPS. We design ablation experiments to demonstrate the effectiveness of the modules in our framework. For "w/o I", we simply stack the spike trains with a time window, and upsample them using bilinear interpolation; for "w/o PS", we replace PixelShuffle with a convolutional layer. The two groups of experiments verify the effectiveness of spike frame preprocessing in Step ①. For "w/o F1" and "w/o F2", we remove the flow-based interpolation in the deblurring module and the merging module. The two groups of ex
184
+
185
+ Table 2. Quantitative results and ablation study on our real-synthetic data. We sample 60 FPS videos from our results for the comparison with Chen21 [3]. $\uparrow (\downarrow)$ indicates larger (smaller) values are better.
186
+
187
+ <table><tr><td colspan="6">Comparison with the state-of-the-art method</td></tr><tr><td>Method</td><td>PSNR↑</td><td>SSIM↑</td><td>HDR-VDP2↑</td><td>HDR-VQM↓</td><td>FPS</td></tr><tr><td>Chen21 [3]</td><td>18.46</td><td>0.697</td><td>27.34</td><td>0.536</td><td rowspan="2">60</td></tr><tr><td>Ours</td><td>30.14</td><td>0.921</td><td>60.14</td><td>0.093</td></tr><tr><td>Chen21 [3]</td><td>/</td><td>/</td><td>/</td><td>/</td><td rowspan="2">1000</td></tr><tr><td>Ours</td><td>24.38</td><td>0.903</td><td>47.79</td><td>0.120</td></tr><tr><td colspan="6">Ablation study</td></tr><tr><td>w/o I</td><td>23.15</td><td>0.886</td><td>46.03</td><td>0.143</td><td rowspan="7">1000</td></tr><tr><td>w/o PS</td><td>23.98</td><td>0.881</td><td>46.47</td><td>0.141</td></tr><tr><td>w/o F1</td><td>19.76</td><td>0.723</td><td>38.95</td><td>0.314</td></tr><tr><td>w/o F2</td><td>18.04</td><td>0.716</td><td>35.89</td><td>0.356</td></tr><tr><td>w/ t-loss</td><td>22.41</td><td>0.864</td><td>43.64</td><td>0.142</td></tr><tr><td>w/o DeConv</td><td>24.31</td><td>0.897</td><td>47.66</td><td>0.127</td></tr><tr><td>w/o DM</td><td>19.01</td><td>0.714</td><td>37.97</td><td>0.338</td></tr></table>
188
+
189
+ periments verify the effectiveness of SC-Flow [16] based interpolation in Steps ② and ③. To further verify the effectiveness of deblurring module, we completely remove it in "w/o DM". For "w/o DeConv", we replace the deformable convolutional layers with traditional convolution layers. For "w/ t-loss", we remove the warping operation on $\mathbf{C}_{i-1}$ and add the temporal consistent loss that is estimated by a pretrained optical flow model [23], which is widely used in video processing [5, 39]. Since the $\mathbf{C}_{i-1}$ is warped by accurate optical flow $\mathbf{F}_{i-1}$ and merged into the current step $i$ , our method fundamentally has a strong temporal consistent constraint for video processing. Thus, our merging module does not need this loss during training.
190
+
191
+ # 4.2. Qualitative Evaluation using Real Data
192
+
193
+ In order to demonstrate the effectiveness of the proposed framework on real-world scenes, we collect 20 sets of real-world data, which are captured by our hybrid camera system shown in Fig. 6. We have compared our slow-motion capability with that of the commercial cameras. As shown in Fig. 9(a), the electric fan is moving at about 40 rounds
194
+
195
+ ![](images/1f268f5b45e6511852f7c10d04b0e95083619bab47aae0ee46893577875ea9d0.jpg)
196
+ Figure 9. Visual quality comparison of real-world data between the proposed method and commercial cameras with the slow-motion capability. In (a), we show two adjacent frames for the video captured by smartphones that have slow-motion capability. The commercial cameras are not calibrated so their results are not strictly aligned with ours. (b) is the comparison with Phantom camera set to 1000 FPS.
197
+
198
+ ![](images/72f0059269a352e5fae6316d26d6d3997fa3fa7268ce9475787b3e6b04ccd5e3.jpg)
199
+ (a)
200
+
201
+ ![](images/87e87953be2aadf893e099d367271fb5ced7b1b9999ee6fc17190a8fe64abca6.jpg)
202
+
203
+ ![](images/12ab643dce1827b18edc878212e142f247aa8711e1c782c9617d11b686395bd6.jpg)
204
+
205
+ ![](images/70699cd7e7248946e810a0c7736901cdbac1ebc3551834ffa17d97f4ef9afb79.jpg)
206
+
207
+ ![](images/c3d3ddfd28c99162d50a607d2239a16a351121102d1537088a159bcc3e287777.jpg)
208
+ (b)
209
+
210
+ ![](images/e93c85761807f3ca93c18962918da723c0e397456cdc1707f1e86ea56554f370.jpg)
211
+ Figure 10. Qualitative visualization of our method in a super fast scene: a balloon bursting. We select 38 frames from our results for showing.
212
+
213
+ per second. The short-exposure image is severely underexposed with less blurry artifacts, and the middle- and long-exposure images have severe blurring and oversaturated artifacts. With the accurate motion and texture information captured by the spiking camera, we have recovered temporally smooth video sequences. Four recovered images are shown for the middle- and long-exposure images. For the videos captured by iPhone 13 and Mi 10, the motions between frames are not continuous. And the electric fan captured by Mi 10 is deformed due to the rolling shutter. In Fig. 9(b), we compare our method with the Phantom<sup>7</sup> camera set to 1000 FPS. Since the exposure time of the Phantom camera is extremely short, it fails to capture regions where scene radiance is weak.
214
+
215
+ # 5. Conclusion
216
+
217
+ We propose an HFR&HDR video reconstruction method with a hybrid camera that is composed of an alternating-exposure RGB sensor and a spiking sensor. Extensive experiments on synthetic and real-world data demonstrate the superior performance of the proposed method.
218
+
219
+ Discussion. (i) For super fast scenes, e.g., a balloon bursting, it is difficult to capture clear motions with a conventional RGB camera at 60 FPS. Therefore, the well-exposed color of the bursting balloon is not captured with the short exposure, which brings challenges to our reconstruction of accurate color. In our results, although the colors are somewhat distorted, we can still recover a smooth video sequence. Once the frame rate of the RGB camera is increased, e.g., 120 FPS, temporally smoother video with more accurate color is expected to be more reliably recovered. (ii) Since QIS [1, 29] share the same imaging model with the spiking camera, our method is ready to be applied to it. We show the simulation in supplementary material.
220
+
221
+ Limitation and future work. Whether a beam splitter can support a practical system on mobile devices is arguable. But when a compact design is not a hard constraint, a beam splitter has unique advantages in spatial alignment, which is why it is broadly adopted in building hybrid prototypes for HDR [15, 24, 33, 50]. A side-by-side arrangement with parallax unavoidably introduces occlusions and alignment issues, which is a promising direction to explore in our future work. Due to the low spatial resolution $(250\times 400)$ of the spiking camera model we use, we have to super-resolve the spike frames in feature space. If higher-resolution spike signals can be directly obtained, our method can achieve better visual quality. Besides, there is a domain gap between synthetic spike trains and real-captured spike trains since the noise of the spiking camera is more complex than that of the simulator. In terms of time complexity, our approach is better suited as a post-processing module. The number of parameters is $45.7\mathrm{M}$ and the time cost per frame is 0.371 s with a single NVIDIA GeForce RTX 3090 graphics card. We hope to tackle these issues in future work and achieve higher frame rate reconstruction.
222
+
223
+ # Acknowledgement
224
+
225
+ This work was supported by National Key R&D Program of China (2021ZD0109803), National Natural Science Foundation of China under Grant No. 62088102, 62136001. Yakun Chang was also supported by China Postdoctoral Science Foundation (8206300710).
226
+
227
+ # References
228
+
229
+ [1] Abhiram Gnanasambandam and Stanley H Chan. HDR imaging with quanta image sensors: Theoretical limits and optimal reconstruction. IEEE Transactions on Computational Imaging, 6:1571-1585, 2020. 2, 8
230
+ [2] Moshe Ben-Ezra and Shree K Nayar. Motion deblurring using hybrid imaging. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. 2
231
+ [3] Guanying Chen, Chaofeng Chen, Shi Guo, Zhetong Liang, Kwan-Yee K Wong, and Lei Zhang. HDR video reconstruction: A coarse-to-fine network and a real-world benchmark dataset. In Proc. of International Conference on Computer Vision, pages 2502-2511, 2021. 2, 4, 6, 7
232
+ [4] Wooyeong Cho, Sanghyeok Son, and Dae-Shik Kim. Weighted multi-kernel prediction network for burst image super-resolution. In Proc. of Computer Vision and Pattern Recognition, pages 404-413, 2021. 4
233
+ [5] Jonghyun Choi, Kuk-Jin Yoon, et al. Learning to super resolve intensity images from events. In Proc. of Computer Vision and Pattern Recognition, pages 2768-2776, 2020. 7
234
+ [6] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proc. of International Conference on Computer Vision, pages 764-773, 2017. 5
235
+ [7] Paul E Debevec and Jitendra Malik. Recovering high dynamic range radiance maps from photographs. In Proc. of ACM SIGGRAPH, pages 1-10. 2008. 2
236
+ [8] Akshay Dudhane, Syed Waqas Zamir, Salman Khan, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Burst image restoration and enhancement. In Proc. of Computer Vision and Pattern Recognition, pages 5759-5768, 2022. 2
237
+ [9] Gabriel Eilertsen, Joel Kronander, Gyorgy Denes, Rafat K Mantiuk, and Jonas Unger. HDR image reconstruction from a single exposure using deep cnns. ACM Transactions on Graphics, 36(6):1-15, 2017. 2
238
+ [10] Guillermo Gallego, Tobi Delbrück, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew J Davison, Jörg Conradt, Kostas Daniilidis, et al. Event-based vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1):154-180, 2020. 1
239
+ [11] Clément Godard, Kevin Matzen, and Matt Uytendaele. Deep burst denoising. In Proc. of European Conference on Computer Vision, pages 538-554, 2018. 2
240
+ [12] Yulia Gryaditskaya, Tania Pouli, Erik Reinhard, Karol Myszkowski, and Hans-Peter Seidel. Motion aware exposure bracketing for HDR video. In Computer Graphics Forum, volume 34, pages 119-130. Wiley Online Library, 2015. 2
241
+ [13] Saghi Hajisharif, Joel Kronander, and Jonas Unger. Adaptive dualiso HDR reconstruction. EURASIP Journal on Image and Video Processing, 2015(1):1-13, 2015. 2
242
+ [14] Jin Han, Yixin Yang, Chu Zhou, Chao Xu, and Boxin Shi. Evintsr-net: Event guided multiple latent frames reconstruction and super-resolution. In Proc. of International Conference on Computer Vision, pages 4882-4891, 2021. 2, 4
243
+ [15] Jin Han, Chu Zhou, Peiqi Duan, Yehui Tang, Chang Xu, Chao Xu, Tiejun Huang, and Boxin Shi. Neuromorphic cam-
244
+
245
+ era guided high dynamic range imaging. In Proc. of Computer Vision and Pattern Recognition, pages 1730-1739, 2020. 2, 8
246
+ [16] Liwen Hu, Rui Zhao, Ziluo Ding, Lei Ma, Boxin Shi, Ruiqin Xiong, and Tiejun Huang. Optical flow estimation for spiking camera. In Proc. of Computer Vision and Pattern Recognition, pages 17844-17853, 2022. 2, 3, 4, 5, 6, 7
247
+ [17] Tiejun Huang, Yajing Zheng, Zhaofei Yu, Rui Chen, Yuan Li, Ruiqin Xiong, Lei Ma, Junwei Zhao, Siwei Dong, Lin Zhu, et al. $1000 \times$ faster camera and machine vision with ordinary devices. Engineering, 2022. 1, 3, 4, 6
248
+ [18] Zhe Jiang, Yu Zhang, Dongqing Zou, Jimmy Ren, Jiancheng Lv, and Yebin Liu. Learning event-based motion deblurring. In Proc. of Computer Vision and Pattern Recognition, pages 3320-3329, 2020. 2, 3, 4
249
+ [19] Nima Khademi Kalantari and Ravi Ramamoorthi. Deep HDR video from sequences with alternating exposures. In Computer graphics forum, volume 38, pages 193-205. Wiley Online Library, 2019. 2, 4, 6
250
+ [20] Nima Khademi Kalantari, Ravi Ramamoorthi, et al. Deep high dynamic range imaging of dynamic scenes. ACM Transactions on Graphics, 36(4):144-1, 2017. 2, 6
251
+ [21] Nima Khademi Kalantari, Eli Shechtman, Connelly Barnes, Soheil Darabi, Dan B Goldman, and Pradeep Sen. Patch-based high dynamic range video. ACM Transactions on Graphics, 32(6):202-1, 2013. 2, 4, 6
252
+ [22] Sing Bing Kang, Matthew Uytendaele, Simon Winder, and Richard Szeliski. High dynamic range video. ACM Transactions on Graphics, 22(3):319-325, 2003. 2
253
+ [23] Wei-Sheng Lai, Jia-Bin Huang, Oliver Wang, Eli Shechtman, Ersin Yumer, and Ming-Hsuan Yang. Learning blind video temporal consistency. In Proc. of European Conference on Computer Vision, pages 170-185, 2018. 7
254
+ [24] Byungju Lee and Byung Cheol Song. Multi-image high dynamic range algorithm using a hybrid camera. Signal Processing: Image Communication, 30:37-56, 2015. 2, 8
255
+ [25] Juan Antonio Lénero-Bardallo, Teresa Serrano-Gotarredona, and Bernabé Linares-Barranco. A 3.6 $\mu$ s latency asynchronous frame-free event-driven dynamic-vision-sensor. IEEE Journal of Solid-State Circuits, 46(6):1443-1455, 2011. 1
256
+ [26] Feng Li, Jingyi Yu, and Jinxiang Chai. A hybrid camera for motion deblurring and depth map super-resolution. In Proc. of Computer Vision and Pattern Recognition, pages 1-8. IEEE, 2008. 3
257
+ [27] Ziwei Liu, Lu Yuan, Xiaou Tang, Matt Uytendaele, and Jian Sun. Fast burst images denoising. ACM Transactions on Graphics, 33(6):1-9, 2014. 4
258
+ [28] Kede Ma, Hui Li, Hongwei Yong, Zhou Wang, Deyu Meng, and Lei Zhang. Robust multi-exposure image fusion: A structural patch decomposition approach. IEEE Transactions on Image Processing, 26(5):2519-2532, 2017. 2
259
+ [29] Sizhuo Ma, Shantanu Gupta, Arin C Ulku, Claudio Bruschini, Edoardo Charbon, and Mohit Gupta. Quanta burst photography. ACM Transactions on Graphics, 39(4):79-1, 2020. 8
260
+
261
+ [30] Stephen Mangiat and Jerry Gibson. High dynamic range video with ghost removal. In Applications of Digital Image Processing XXXIII, volume 7798, pages 307-314. SPIE, 2010. 2
262
+ [31] Stephen Mangiat and Jerry Gibson. Spatially adaptive filtering for registration artifact removal in HDR video. In Proc. of International Conference on Image Processing, pages 1317-1320. IEEE, 2011. 2
263
+ [32] Rafal Mantiuk, Kil Joong Kim, Allan G Rempel, and Wolfgang Heidrich. HDR-VDP-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions. ACM Transactions on Graphics, 30(4):1-14, 2011. 7
264
+ [33] Morgan McGuire, Wojciech Matusik, Hanspeter Pfister, Billy Chen, John F Hughes, and Shree K Nayar. Optical splitting trees for high-precision monocular imaging. IEEE Computer Graphics and Applications, 27(2):32-42, 2007. 2, 8
265
+ [34] Tom Mertens, Jan Kautz, and Frank Van Reeth. Exposure fusion. In Pacific Conference on Computer Graphics and Applications, pages 382-390, 2007. 2
266
+ [35] Nico Messikommer, Stamatios Georgoulis, Daniel Gehrig, Stepan Tulyakov, Julius Erbach, Alfredo Bochicchio, Yuanyou Li, and Davide Scaramuzza. Multi-Bracket high dynamic range imaging with event cameras. In Proc. of Computer Vision and Pattern Recognition, pages 547–557, 2022. 2
267
+ [36] Manish Narwaria, Matthieu Perreira Da Silva, and Patrick Le Callet. HDR-VQM: An objective quality measure for high dynamic range video. Signal Processing: Image Communication, 35:46-60, 2015. 7
268
+ [37] Shree K Nayar and Tomoo Mitsunaga. High dynamic range imaging: Spatially varying pixel exposures. In Proc. of Computer Vision and Pattern Recognition, volume 1, pages 472-479. IEEE, 2000. 2
269
+ [38] Avinash Paliwal and Nima Khademi Kalantari. Deep slow motion video reconstruction with hybrid imaging system. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(7):1557-1569, 2020. 3
270
+ [39] Henri Rebecq, René Ranftl, Vladlen Koltun, and Davide Scaramuzza. High speed and high dynamic range video with an event camera. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(6):1964-1980, 2019. 7
271
+ [40] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In Proc. of Computer Vision and Pattern Recognition, pages 779-788, 2016. 1
272
+ [41] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. Proc. of Advances in Neural Information Processing Systems, 28, 2015. 1
273
+ [42] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015. 2
274
+ [43] Yash Sanghvi, Abhiram Gnanasambandam, and Stanley H Chan. Photon limited non-blind deblurring using algorithm
275
+
276
+ unrolling. IEEE Transactions on Computational Imaging, 2022.5
277
+ [44] Marcel Santana Santos, Tsang Ing Ren, and Nima Khademi Kalantari. Single image HDR reconstruction using a cnn with masked features and perceptual loss. arXiv preprint arXiv:2005.07335, 2020. 2
278
+ [45] Pradeep Sen, Nima Khademi Kalantari, Maziar Yaesoubi, Soheil Darabi, Dan B Goldman, and Eli Shechtman. Robust patch-based HDR reconstruction of dynamic scenes. ACM Transactions on Graphics, 31(6):203-1, 2012. 2
279
+ [46] Richard Shaw, Sibi Catley-Chandar, Ales Leonardis, and Eduardo Perez-Pellitero. HDR reconstruction from bracketed exposures and events. arXiv preprint arXiv:2203.14825, 2022. 2
280
+ [47] Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proc. of Computer Vision and Pattern Recognition, pages 1874-1883, 2016. 4, 5
281
+ [48] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Proc. of Advances in Neural Information Processing Systems, 28, 2015. 5
282
+ [49] Yu-Wing Tai, Hao Du, Michael S Brown, and Stephen Lin. Image/video deblurring using a hybrid camera. In Proc. of Computer Vision and Pattern Recognition, pages 1-8, 2008. 2
283
+ [50] Michael D Tocci, Chris Kiser, Nora Tocci, and Pradeep Sen. A versatile HDR video production system. ACM Transactions on Graphics, 30(4):1-10, 2011. 2, 8
284
+ [51] Stepan Tulyakov, Alfredo Bochicchio, Daniel Gehrig, Stamatios Georgoulis, Yuanyou Li, and Davide Scaramuzza. Time Lens++: Event-based frame interpolation with parametric non-linear flow and multi-scale fusion. In Proc. of Computer Vision and Pattern Recognition, pages 17755-17764, 2022. 2, 3
285
+ [52] Stepan Tulyakov, Daniel Gehrig, Stamatios Georgoulis, Julius Erbach, Mathias Gehrig, Yuanyou Li, and Davide Scaramuzza. Time Lens: Event-based video frame interpolation. In Proc. of Computer Vision and Pattern Recognition, pages 16155-16164, 2021. 2, 3
286
+ [53] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004. 6
287
+ [54] Fang Xu, Lei Yu, Bishan Wang, Wen Yang, Gui-Song Xia, Xu Jia, Zhendong Qiao, and Jianzhuang Liu. Motion deblurring with real events. In Proc. of International Conference on Computer Vision, pages 2583-2592, 2021. 3, 4
288
+ [55] Qingsen Yan, Lei Zhang, Yu Liu, Yu Zhu, Jinqiu Sun, Qinfeng Shi, and Yanning Zhang. Deep HDR imaging via a non-local network. IEEE Transactions on Image Processing, 29:4308-4322, 2020. 2
289
+ [56] Zhiyang Yu, Yu Zhang, Deyuan Liu, Dongqing Zou, Xijun Chen, Yebin Liu, and Jimmy S Ren. Training weakly supervised video frame interpolation with events. In Proc. of International Conference on Computer Vision, pages 14589-14598, 2021. 3
292
+ [57] Cheng Zhang, Shaolin Su, Yu Zhu, Qingsen Yan, Jinqiu Sun, and Yanning Zhang. Exploring and evaluating image restoration potential in dynamic scenes. In Proc. of Computer Vision and Pattern Recognition, pages 2067-2076, 2022. 1
293
+ [58] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proc. of Computer Vision and Pattern Recognition, pages 586-595, 2018. 6
294
+ [59] Xiang Zhang and Lei Yu. Unifying motion deblurring and frame interpolation with events. In Proc. of Computer Vision and Pattern Recognition, pages 17765-17774, 2022. 2
295
+ [60] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image super-resolution. In Proc. of Computer Vision and Pattern Recognition, pages 2472-2481, 2018. 4
296
+ [61] Jing Zhao, Ruiqin Xiong, Hangfan Liu, Jian Zhang, and Tiejun Huang. Spk2Imgnet: Learning to reconstruct dynamic scene from continuous spike stream. In Proc. of Computer Vision and Pattern Recognition, pages 11996-12005, 2021. 1, 4, 6
297
+ [62] Yajing Zheng, Lingxiao Zheng, Zhaofei Yu, Boxin Shi, Yonghong Tian, and Tiejun Huang. High-speed image reconstruction through short-term plasticity for spiking cameras. In Proc. of Computer Vision and Pattern Recognition, pages 6358-6367, 2021. 1, 4
298
+ [63] Lin Zhu, Siwei Dong, Tiejun Huang, and Yonghong Tian. A retina-inspired sampling method for visual texture reconstruction. In Proc. of International Conference on Multimedia and Expo, 2019. 1
299
+ [64] Lin Zhu, Siwei Dong, Jianing Li, Tiejun Huang, and Yonghong Tian. Retina-like visual image reconstruction via spiking neural model. In Proc. of Computer Vision and Pattern Recognition, pages 1438-1446, 2020. 1, 4
300
+ [65] Yunhao Zou, Yinqiang Zheng, Tsuyoshi Takatani, and Ying Fu. Learning to reconstruct high speed and high dynamic range videos from events. In Proc. of Computer Vision and Pattern Recognition, pages 2024-2033, 2021. 1
1000fpshdrvideowithaspikergbhybridcamera/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:58ac6679580481fd53de01c21b689e070efc79216f9eb92a1fc2f684a8937b0f
3
+ size 778787
1000fpshdrvideowithaspikergbhybridcamera/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d1c9ead3a8f6d068356920828f6696d7e44dbb628a7ce05c48f8773659bf9911
3
+ size 466231
1vs100parameterefficientlowrankadapterfordensepredictions/3b75c6c9-33bc-4e41-9df3-2e14ac85ef59_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f3f82356cd9577576ad4ce35b1a7a0e959f18e601ec1ae9c8b45cf15d53042b3
3
+ size 79112
1vs100parameterefficientlowrankadapterfordensepredictions/3b75c6c9-33bc-4e41-9df3-2e14ac85ef59_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d58ee5640bdae5c09cff0b40bab0aed63b370d433617909800fd933aed72bce9
3
+ size 101463
1vs100parameterefficientlowrankadapterfordensepredictions/3b75c6c9-33bc-4e41-9df3-2e14ac85ef59_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8fc9f6430cecf54e04bc50637a348599005eb52434349086aeebda29c091e51e
3
+ size 996453
1vs100parameterefficientlowrankadapterfordensepredictions/full.md ADDED
@@ -0,0 +1,359 @@
1
+ # 1% VS 100%: Parameter-Efficient Low Rank Adapter for Dense Predictions
2
+
3
+ Dongshuo Yin $^{1,2,\dagger}$ , Yiran Yang $^{1,2,\dagger}$ , Zhechao Wang $^{1,2}$ , Hongfeng Yu $^{1}$ , Kaiwen Wei $^{1,2}$ , Xian Sun $^{1,2,*}$ $^{1}$ Key Laboratory of Network Information System Technology, Aerospace Information Research Institute, Chinese Academy of Sciences
4
+ $^{2}$ School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences
5
+
6
+ {yindongshuo19, yangyiran19, wangzhechao21, weikaiwen19}@mails.ucas.ac.cn {yuhf, sunxian}@aircas.ac.cn
7
+
8
+ # Abstract
9
+
10
+ Fine-tuning large-scale pre-trained vision models to downstream tasks is a standard technique for achieving state-of-the-art performance on computer vision benchmarks. However, fine-tuning the whole model with millions of parameters is inefficient as it requires storing a same-sized new model copy for each task. In this work, we propose LoRand, a method for fine-tuning large-scale vision models with a better trade-off between task performance and the number of trainable parameters. LoRand generates tiny adapter structures with low-rank synthesis while keeping the original backbone parameters fixed, resulting in high parameter sharing. To demonstrate LoRand's effectiveness, we implement extensive experiments on object detection, semantic segmentation, and instance segmentation tasks. By training only a small percentage (1% to 3%) of the pre-trained backbone parameters, LoRand achieves comparable performance to standard fine-tuning on COCO and ADE20K and outperforms fine-tuning on the low-resource PASCAL VOC dataset.
11
+
12
+ # 1. Introduction
13
+
14
+ With the rapid development of computer vision, parameters in deep models are surging. Giant models need to be trained with massive resources to achieve superior performance [3, 17, 47, 58], which are often unavailable to many academics and institutions. The "Pretrain & Finetuning" paradigm is widely used to alleviate this dilemma. Teams with sufficient computation resources utilise enormous datasets [2, 9, 40, 50] to train superior backbones [4, 32, 40, 48] and optimise the models with ideal performances. Models pretrained in this way usually have a superior understanding of homogeneous data.
15
+
16
+ ![](images/284244850d213821e678546c56bd87e129870b5533c18d88c83c8caed88f03ff.jpg)
17
+ Figure 1. Comparisons of trainable backbone parameters between our methods (red) and fine-tuning (black). In COCO, we achieve advanced performances and outperform most existing backbones with only $0.9\sim 2.5\mathrm{M}$ new backbone parameters (Cascade-RCNN is employed as the detector). The fine-tuning paradigm produces massive redundant backbone parameters, whereas our approach saves over $97\%$ of hardware resources with competitive performances. The sizes of the circles intuitively compare the number of trainable parameters.
18
+
19
+ After that, researchers with limited computational resources can transfer the understanding capabilities of the pre-trained models to downstream tasks with promising performances by fine-tuning [1,26,46,53].
20
+
21
+ However, the fine-tuned model will produce a new set of parameters as large as the pre-trained model. The new parameters are independent of the pre-trained model and unshareable, which is very hardware-intensive for cloud service providers [23, 49]. Figure 1 compares the parameter quantities of some remarkable backbones and their performances on the COCO [28] dataset. Recent advances in natural language processing (NLP) [30, 38] show that large pre-trained models trained with rich data have strong generalisability.
22
+
23
+ ![](images/b58c6b179031f202b13762129589750230c30d575d80af5f850f0a002de939e8.jpg)
24
+ Swin-Transformer Block
25
+
26
+ ![](images/7edbd25abc95b7d7da6fcc332f29fee39ad6e9b53dd1dfb97cf12b8d90b381de.jpg)
27
+ LoRand Layer
28
+ Figure 2. Architecture of the adapter module and its integration with the Transformer. Left: We add two LoRand structures to each SwinBlock located behind the W/SW-MSA and MLP structures respectively. Right: LoRand contains two Multi-branch low-rank projections and nonlinearity. We include skip-connection to LoRand to enhance its robustness.
29
+
30
+ This generalisability means that most parameters in the pre-trained models can be shared with the new tasks [22, 36, 37, 44, 59]. Moreover, recent literature demonstrates that the feature understanding of pre-trained models could be reduced when they are fine-tuned in low-resource situations [12, 36]. To tackle these issues, NLP researchers propose two new training paradigms based on pre-trained models: Adapter Tuning [22] and Prompt Tuning [30], both of which tune the new models by fixing the pre-trained parameters and adding a few trainable structures (less than $10\%$ of the backbone). These paradigms create a new buzz in NLP and achieve impressive performances that can be competitive with fine-tuning [12, 22, 30, 36-38, 44, 59]. Advances in NLP also shed new light on computer vision. Jia et al. [24] propose Visual Prompt Tuning (VPT) and demonstrate that VPT can outperform fine-tuning on image classification tasks by training a small number of trainable parameters. Nevertheless, VPT shows weakness on more challenging dense predictions like semantic segmentation compared with fine-tuning [24].
31
+
32
+ To find a parameter-efficient paradigm with promising performance in computer vision, we explore the potential of Adapter Tuning for visual dense predictions. We employ the advanced Swin Transformer [32] trained with ImageNet-22K [9] as the pre-trained model. After that, we add bottleneck adapter structures [22] behind each SwinBlock and freeze the original backbone parameters when training, but this approach cannot achieve comparable performance to fine-tuning, as mentioned in [24].
33
+
34
+ In the experiments, we find that the models perform better with sparser adapter structures. To improve the performance of Adapter Tuning, we propose the Low-Rank Adapter (LoRand) to reduce the adapter parameters, as shown in Figure 2. LoRand sparsely parameterizes the matrices in adapters by low-rank synthesis. Specifically, the projection matrix of the fully-connected layer (FC) in LoRand is a product of multiple low-rank matrices, which reduces FC parameters by more than $80\%$ . We implement extensive experiments on object detection (PASCAL VOC [14]), semantic segmentation (ADE20K [62]), and instance segmentation (MS COCO [28]) to verify the capability of LoRand. Experimental results show that LoRand-Tuning is comparable to fine-tuning on multiple tasks with only $1.8\%$ to $2.8\%$ new backbone parameters, which suggests that the pre-trained backbone parameters can be fully shared. More interestingly, our method completely outperforms fine-tuning on the PASCAL VOC dataset, illustrating that LoRand-Tuning can reduce the impairment of fine-tuning on pre-trained models in low-resource configurations. Our method demonstrates that the LoRand-Tuning paradigm can substantially save storage resources and achieve competitive performances on most dense prediction tasks. In summary, our contributions are three-fold:
35
+
36
+ - We demonstrate that visual pre-trained models are highly generalisable and shareable. With our training methods, new tasks require only a few trainable parameters to achieve performances comparable to finetuning, which can save massive hardware resources.
37
+ - We propose the LoRand structure for sparser adapters based on low-rank synthesis. We demonstrate that the backbone parameters in fine-tuning are highly redundant, which can be replaced by $1.8\%$ to $2.8\%$ additional parameters in LoRand.
38
+ - Extensive experiments on object detection, semantic segmentation, and instance segmentation show that LoRand-Tuning can achieve remarkable performances and reduce massive new parameters in challenging dense prediction tasks.
39
+
40
+ # 2. Related Work
41
+
42
+ # 2.1. Training Paradigms in NLP
43
+
44
+ Computer vision has been continuously inspired by NLP in recent years, including the visual transformer series [5,13,29,32] and self-supervised MAE series [15,19,60]. In fact, NLP is leading new training trends different from finetuning. Fine-tuning produces a new parameter set for each new task, which is parametrically inefficient for plenty of linguistic tasks [22,30]. To solve this problem, [30] and [22] have proposed "Prompt Tuning" and "Adapter Tuning" respectively, both of which fix all parameters of the backbone
45
+
46
+ and plug a few tiny trainable structures (less than $10\%$ of the backbone) to adapt the pre-trained model to the new tasks. "Prompt tuning" adds learnable parameters (also known as prompts) to the input or intermediate layers to change the input space of the new tasks. "Prompts" can motivate the model to remember knowledge learned in the previous tasks. "Adapter tuning" adds learnable bottleneck structures after each block to connect the pre-trained model with new tasks. Adapter and prompt demonstrate the coexistence of parameter efficiency and high performances in NLP, stimulating studies in CV. [24] proposes Visual Prompt Tuning (VPT) for image classification and semantic segmentation, but the performance of VPT on semantic segmentation is still far from fine-tuning. This phenomenon motivates us to explore whether adapter tuning can bring a new paradigm in computer vision with fewer parameters and better performances. In this work, we try to explore parameter-efficient and high-performance adapter structures.
47
+
48
+ # 2.2. Adapter Tuning
49
+
50
+ Adapters have been widely studied in NLP. Houlsby et al. [22] first add a bottleneck adapter structure to the transformer blocks and fix the original backbone, which achieves comparable performances to fine-tuning. Figure 3 illustrates the differences between fine-tuning and adaptertuning. [37,44,59] further reduce parameters in the adapter with closer performances to fine-tuning. [18,34,39] outperform fine-tuning on low-resource tasks, demonstrating that more parameters may not improve performance when finetuning pre-trained models [36]. In computer vision, [41] add convolutional adapters to the ResNet [20] and obtain competitive results in image classification. Adapter concept has also been applied in multimodal [33], vision-and-language [51], and domain adaptation [56], but these methods are only applicable under specific conditions. [7, 21, 25, 31] investigate the potential of adapter-tuning for visual classification. [8] apply the adapter structure to visual dense predictions without fixing any original parameters, which indeed trades more parameters for better performances.
51
+
52
+ # 2.3. Low-rank Approximation
53
+
54
+ The low-rank approximation uses multiple low-dimensional tensors to approximate a larger tensor with higher dimensions. Tensor dimensions and sizes in machine learning are very large, so low-rank approximations are widely used in face recognition [61], distributed training [54], transfer learning [11], and cross-domain [10]. A $b \times c$ matrix $M$ can be approximated with $N$ low-rank matrices $Q$ by the following equation:
55
+
56
+ $$
57
+ M _ {b \times c} = \prod_ {i = 1} ^ {N} Q _ {r _ {i} \times s _ {i}}, \tag {1}
58
+ $$
59
+
60
+ ![](images/076b68aad349b00b6a3bfa1feb3c01031b3a22e132f2f3e1d5dbafcabaff3fd7.jpg)
61
+ Figure 3. Comparison between the Adapter-Tuning and Fine-Tuning paradigms. Fine-Tuning tunes all parameters delivered by the pre-trained model. Adapter-Tuning freezes all structures and parameters in the pre-trained model and only trains the additional parameters in adapters. Parameters in the decoder and head are trainable in both paradigms.
62
+
63
+ where $N$ takes different values depending on the approximation method. We implement the low-rank approximation of the adapter matrices by heuristic learning.
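+
+ As a concrete illustration (not from the paper), the following minimal PyTorch sketch builds a rank-$r$ approximation of a $b \times c$ matrix from $N = 2$ factors via truncated SVD and compares the number of free parameters against the dense matrix; all sizes are assumptions chosen for the example.
+
+ ```python
+ # Minimal sketch: approximate a b x c matrix with a product of N = 2
+ # low-rank factors (cf. Eq. 1) and compare free-parameter counts.
+ import torch
+
+ b, c, r = 768, 384, 8                    # r plays the role of the low rank
+ M = torch.randn(b, c)
+
+ U, S, Vh = torch.linalg.svd(M, full_matrices=False)
+ Q1 = U[:, :r] * S[:r]                    # b x r factor
+ Q2 = Vh[:r, :]                           # r x c factor
+ M_hat = Q1 @ Q2                          # rank-r approximation of M
+
+ print(b * c, b * r + r * c)              # 294912 dense vs. 9216 low-rank parameters
+ print(torch.linalg.matrix_norm(M - M_hat) / torch.linalg.matrix_norm(M))
+ ```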
64
+
65
+ # 3. Method
66
+
67
+ In this section, we will elaborate on the proposed low-rank adapter (LoRand) in three parts: adapter tuning paradigm, LoRand, and parameter analysis.
68
+
69
+ # 3.1. Adapter Tuning Paradigm
70
+
71
+ For dataset $D = \{(x_{i},y_{i})\}_{i = 1}^{N}$ , fine-tuning calculates the loss between inference results and labels according to the formula:
72
+
73
+ $$
74
+ L (D, \theta) = \sum_ {i = 1} ^ {N} \operatorname {l o s s} \left(f _ {\theta} \left(x _ {i}\right), y _ {i}\right), \tag {2}
75
+ $$
76
+
77
+ where $f_{\theta}$ denotes the network forward function and loss represents the loss function. After that, $\theta$ is optimized through
78
+
79
+ $$
80
+ \theta \leftarrow \underset {\theta} {\arg \min } L (D, \theta). \tag {3}
81
+ $$
82
+
83
+ In adapter tuning paradigm, parameters consist of two parts, including parameters in adapter $\theta_{A}$ and parameters in the original architecture $\theta$ . Here, $\theta$ is further divided into frozen part $\theta_{F}$ and trainable part $\theta_{T}$ , noted as $\theta = \{\theta_{F},\theta_{T}\}$ . Let $\Omega$ be all the trainable parameters, then $\Omega = \{\theta_{A},\theta_{T}\}$ . The loss function and optimization formula in adapter can be written as:
84
+
85
+ $$
86
+ L \left(D, \theta_ {F}, \Omega\right) = \sum_ {i = 1} ^ {N} \operatorname {l o s s} \left(f _ {\theta_ {F}, \Omega} \left(x _ {i}\right), y _ {i}\right), \tag {4}
87
+ $$
88
+
89
+ ![](images/1121da732dac8d417df762986ffe346f6c0ca9d44752793fbadaa91c64385b68.jpg)
90
+ Figure 4. Left: Multi-branch projection in LoRand. The down-projection $W^{D}$ and up-projection $W^{U}$ matrices are the summation of $\alpha$ branches $W_{1}^{D}(W_{1}^{U})\ldots W_{\alpha}^{D}(W_{\alpha}^{U})$ . $K_{i}$ in $i$ -th branch is shared between $W_{i}^{D}$ and $W_{i}^{U}$ . All the $P, Q,$ and $K$ are trainable, while all the $W$ matrices are calculated. Right: Comparisons of the same-sized projection matrices between LoRand and Adapter. $(m,n)$ in the table are typical values in SwinBlocks. LoRand has far fewer parameters than Adapter. With the same projection dimension, LoRand saves over 80% parameters of the Adapter in Swin Transformers. $(\alpha ,\beta)$ here are (2,8), the same as the experiments.
91
+
92
+ <table><tr><td>(m,n)</td><td>PLoRand</td><td>PAdapter</td><td>%</td></tr><tr><td>(96,48)</td><td>4736</td><td>9216</td><td>51.39%</td></tr><tr><td>(192,96)</td><td>9344</td><td>36864</td><td>25.35%</td></tr><tr><td>(384,192)</td><td>18560</td><td>147456</td><td>12.59%</td></tr><tr><td>(768,384)</td><td>36992</td><td>589824</td><td>6.27%</td></tr><tr><td>……</td><td>……</td><td>……</td><td>……</td></tr></table>
93
+
94
+ $$
95
+ \Omega \leftarrow \underset {\Omega} {\arg \min } L (D, \theta_ {F}, \Omega). \tag {5}
96
+ $$
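+
+ A minimal sketch of this paradigm follows (assumptions: a detector-style `model` whose pre-trained `backbone` hosts adapter modules with "adapter" in their parameter names, plus a trainable neck/head): $\theta_F$ is frozen and the optimizer only receives $\Omega = \{\theta_A, \theta_T\}$, mirroring Eqs. (4) and (5).
+
+ ```python
+ # Sketch only: freeze theta_F and optimize Omega = {theta_A, theta_T}.
+ import torch
+
+ def build_optimizer(model, lr=1e-4):
+     trainable = []
+     for name, p in model.named_parameters():
+         if name.startswith("backbone") and "adapter" not in name:
+             p.requires_grad_(False)      # theta_F: frozen pre-trained backbone weights
+         else:
+             trainable.append(p)          # theta_A (adapters) + theta_T (neck/head)
+     return torch.optim.AdamW(trainable, lr=lr)
+ ```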
97
+
98
+ # 3.2. LoRand
99
+
100
+ Before introducing LoRand, we first review the existing adapter structure. Conventional adapters are bottleneck structures containing a down-projection, an up-projection, and a non-linear activation function. Besides, adapters ensure the robustness of the model by adding residual [20] structures. Adapter layer can be formulated as follows:
101
+
102
+ $$
103
+ A ^ {l} = U ^ {l} \left(G e L U (D ^ {l} (x))\right) + x, \tag {6}
104
+ $$
105
+
106
+ where $U^l$ and $D^l$ represent the up and down projections in the $l$ -th adapter layer, and GeLU is the activation function. It is clear that the parameters in adapter come from the projections. The projection process can be written as:
107
+
108
+ $$
109
+ y = W x + b, \tag {7}
110
+ $$
111
+
112
+ which means most adapter parameters are in $W$ .
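+
+ For reference, a minimal sketch of the conventional bottleneck adapter of Eqs. (6) and (7) is given below; the module and argument names are assumptions, not code from the paper.
+
+ ```python
+ # Conventional bottleneck adapter: down-projection, GeLU, up-projection,
+ # plus a skip-connection, i.e. A(x) = U(GeLU(D(x))) + x.
+ import torch.nn as nn
+
+ class BottleneckAdapter(nn.Module):
+     def __init__(self, dim, mid_dim):
+         super().__init__()
+         self.down = nn.Linear(dim, mid_dim)   # D: roughly dim * mid_dim weights
+         self.up = nn.Linear(mid_dim, dim)     # U: roughly mid_dim * dim weights
+         self.act = nn.GELU()
+
+     def forward(self, x):
+         return self.up(self.act(self.down(x))) + x
+ ```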
113
+
114
+ To reduce the adapter parameters, we propose a low-rank adapter (LoRand) structure to replace the $W$ in the projection structures. Figure 2 shows the simplified structure of LoRand. Here we approximate not a specific matrix $W$ but an ideal matrix $W_{best}$ that can transform the feature space of the pre-trained model into new tasks by heuristic learning. The approximation matrix $\hat{W}$ has the same size as $W$ , but the low-rank design makes $\hat{W}$ have far fewer free degrees than a common $W$ .
115
+
116
+ Specifically, we synthesize each $W$ by multiplying three low-rank matrices $P \in \mathbb{R}^{\beta \times m}$ , $K \in \mathbb{R}^{\beta \times \beta}$ , $Q \in \mathbb{R}^{\beta \times n}$
117
+
118
+ that is:
119
+
120
+ $$
121
+ W = P ^ {T} K Q, \tag {8}
122
+ $$
123
+
124
+ where $\beta \ll \min(m, n)$ ensuring that $P$ and $Q$ are low-rank matrices. $K$ can be regarded as a kernel matrix that controls the parameter size of LoRand.
125
+
126
+ After that, we add multi-branch structures to LoRand to increase the robustness and stability of low-rank matrices, which is inspired by MoE [43] and adaboost [45,52]. Every $W$ consists of $\alpha$ branches, that is:
127
+
128
+ $$
129
+ W = \sum_ {i = 1} ^ {\alpha} W _ {i} = \sum_ {i = 1} ^ {\alpha} P _ {i} ^ {T} K _ {i} Q _ {i}. \tag {9}
130
+ $$
131
+
132
+ In addition, we share the kernel matrix $K$ of the two projection layers within each branch. We hope the sharing mechanism can promote the coherence of two projection layers during training process. Besides, the shared $K$ also slightly reduces the number of LoRand parameters. Up to now, the $W^{U}$ and $W^{D}$ in a complete LoRand structure can be represented as:
133
+
134
+ $$
135
+ W ^ {U} = \sum_ {i = 1} ^ {\alpha} W _ {i} ^ {U} = \sum_ {i = 1} ^ {\alpha} \left(P _ {i} ^ {U}\right) ^ {T} K _ {i} Q _ {i} ^ {U}, \tag {10}
136
+ $$
137
+
138
+ $$
139
+ W ^ {D} = \sum_ {i = 1} ^ {\alpha} W _ {i} ^ {D} = \sum_ {i = 1} ^ {\alpha} \left(P _ {i} ^ {D}\right) ^ {T} K _ {i} Q _ {i} ^ {D}, \tag {11}
140
+ $$
141
+
142
+ where $K_{i}$ is shared in $W^{U}$ and $W^{D}$ . Figure 4 presents the detailed designs of the multi-branch projection.
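+
+ The following PyTorch sketch shows one way to realise Eqs. (8)-(11): the down- and up-projection weights are synthesised from $\alpha$ branches of low-rank factors $P$, $K$, $Q$, with the kernel $K_i$ shared between $W^D$ and $W^U$ inside each branch. Class and parameter names are assumptions for illustration.
+
+ ```python
+ # LoRand layer sketch: W^D = sum_i (P_i^D)^T K_i Q_i^D and
+ # W^U = sum_i (P_i^U)^T K_i Q_i^U, with K_i shared across both projections.
+ import torch
+ import torch.nn as nn
+
+ class LoRandLayer(nn.Module):
+     def __init__(self, m, n, alpha=2, beta=8):
+         super().__init__()
+         init = lambda *s: nn.Parameter(torch.randn(*s) * 0.02)
+         self.p_down, self.q_down = init(alpha, beta, m), init(alpha, beta, n)
+         self.p_up, self.q_up = init(alpha, beta, n), init(alpha, beta, m)
+         self.k = init(alpha, beta, beta)           # shared kernel matrices K_i
+         self.b_down = nn.Parameter(torch.zeros(n))
+         self.b_up = nn.Parameter(torch.zeros(m))
+         self.act = nn.GELU()
+
+     def forward(self, x):                          # x: (..., m)
+         w_down = torch.einsum("abm,abc,acn->mn", self.p_down, self.k, self.q_down)
+         w_up = torch.einsum("abn,abc,acm->nm", self.p_up, self.k, self.q_up)
+         h = self.act(x @ w_down + self.b_down)     # down-projection to dimension n
+         return h @ w_up + self.b_up + x            # up-projection plus skip-connection
+ ```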
143
+
144
+ # 3.3. Parameter Analysis
145
+
146
+ In this section, we will compare the parameters of LoRand and the typical adapter [22] with the same size of projection matrix.
147
+
148
+ Adapter Let $m$ be the input dimension of the adapter and $n$ be the middle layer dimension after down projection. Then the number of parameters in each adapter is $2mn$ (ignoring the few biases). In general, adapter tuning places two adapter modules in each block, so the space complexity of all adapter parameters in $\gamma$ blocks can be written as:
149
+
150
+ $$
151
+ O (4 \gamma m n). \tag {12}
152
+ $$
153
+
154
+ LoRand According to section 3.2, each $W$ contains $\alpha$ sets of $\{P,Q,K\}$ , that is:
155
+
156
+ $$
157
+ \alpha \left(m \beta + \beta^ {2} + n \beta\right). \tag {13}
158
+ $$
159
+
160
+ Each LoRand consists of two $W$ and $\alpha$ shared $K$ , so the parameter quantity of each LoRand is:
161
+
162
+ $$
163
+ 2 \alpha (m \beta + \beta^ {2} + n \beta) - \alpha \beta^ {2} = 2 \alpha \beta (m + n + \beta / 2). \tag {14}
164
+ $$
165
+
166
+ Each block has two LoRand structures, so the number of parameters in $\gamma$ blocks is:
167
+
168
+ $$
169
+ 4 \alpha \beta \gamma (m + n) + 2 \alpha \beta^ {2} \gamma . \tag {15}
170
+ $$
171
+
172
+ As $\alpha, \beta, \gamma \ll \min(m, n)$ , the space complexity here can be written as:
173
+
174
+ $$
175
+ O \left(4 \alpha \beta \gamma (m + n)\right). \tag {16}
176
+ $$
177
+
178
+ Comparison between Formulas 12 and 16 can be simplified as:
179
+
180
+ $$
181
+ O (m n), \tag {17}
182
+ $$
183
+
184
+ and
185
+
186
+ $$
187
+ O (\alpha \beta (m + n)). \tag {18}
188
+ $$
189
+
190
+ Given that $\alpha, \beta \ll \min(m, n)$ , the space complexity of LoRand is far lower than the typical adapter. The table in Figure 4 illustrates that LoRand saves most Adapter parameters with the same projecting dimension.
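+
+ A quick numerical check of this analysis (reproducing the table in Figure 4, with $\alpha = 2$ and $\beta = 8$ as in the experiments):
+
+ ```python
+ # Per-layer parameter counts: plain adapter 2mn vs. LoRand 2*alpha*beta*(m+n+beta/2).
+ def adapter_params(m, n):
+     return 2 * m * n
+
+ def lorand_params(m, n, alpha=2, beta=8):
+     return int(2 * alpha * beta * (m + n + beta / 2))
+
+ for m, n in [(96, 48), (192, 96), (384, 192), (768, 384)]:
+     l, a = lorand_params(m, n), adapter_params(m, n)
+     print(m, n, l, a, f"{100 * l / a:.2f}%")
+ # 4736/9216 (51.39%), 9344/36864 (25.35%), 18560/147456 (12.59%), 36992/589824 (6.27%)
+ ```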
191
+
192
+ # 4. Experiments
193
+
194
+ We evaluate LoRand on multiple dense prediction tasks, including object detection, semantic segmentation, and instance segmentation. We also evaluate LoRand under low-resource conditions. We first describe our experimental setup in Section 4.1, including pre-trained backbones, baselines, LoRand settings, and downstream tasks. Then we present the main results of three benchmarks in Section 4.2. We also implement ablation study in Section 4.3 to investigate the impact of structural settings in LoRand.
195
+
196
+ # 4.1. Experimental Setup
197
+
198
+ Pretrained Backbones We conduct experiments on the advanced Swin Transformer [32] architectures. All backbones in this section are pre-trained by ImageNet-22k [9]. Pre-trained models are provided by OpenMMLab [6].
199
+
200
+ Baselines We compare LoRand with three other common training methods:
201
+
202
+ (a) FULL: update all parameters in the architecture.
203
+ (b) FIXED: fix pre-trained parameters in Swin and train other parts of the architecture (neck, head).
204
+ (c) ADAPTER: add two trainable adapter structures in each SwinBlock following [22], and freeze other parts of the backbone. We evaluate two forms of adapter with different middle layer dimensions $(D_{ML})$ :
205
+ - ADAPTER-B: $D_{ML}$ is a half of input dimension.
206
+ - ADAPTER-T: $D_{ML}$ is a quarter of input dimension.
207
+
208
+ LoRand Settings We conducted experiments on three LoRand variants, which have different branch numbers $\alpha$ and kernel matrix dimensions $\beta$ (a worked size comparison follows the list below).
209
+
210
+ - LoRand: $\alpha = 2$ , $\beta = 8$ (Standard).
211
+ - LoRand+: $\alpha = 4, \beta = 8$ .
212
+ - LoRand++: $\alpha = 4, \beta = 16$ .
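+
+ Plugging these settings into the per-layer count from Section 3.3 gives a feel for how $\alpha$ and $\beta$ scale the adapter size; the dimension $(m, n) = (768, 384)$ below is only an illustrative Swin-stage size, not a prescribed configuration.
+
+ ```python
+ # Per-layer size of the three variants at (m, n) = (768, 384),
+ # using the Section 3.3 formula 2*alpha*beta*(m+n+beta/2).
+ variants = {"LoRand": (2, 8), "LoRand+": (4, 8), "LoRand++": (4, 16)}
+ for name, (alpha, beta) in variants.items():
+     print(name, int(2 * alpha * beta * (768 + 384 + beta / 2)))
+ # LoRand 36992, LoRand+ 73984, LoRand++ 148480
+ ```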
213
+
214
+ Downstream Tasks We conducted experiments on COCO [28], ADE20K [62], and PASCAL VOC [14] benchmarks to widely evaluate LoRand's performance on main dense prediction tasks.
215
+
216
+ COCO 2017 [28] is the most commonly used dataset for object detection and instance segmentation, which contains 118K training and 5K validation images. We perform experiments on the validation set. For a fair comparison, all experiments performed on COCO employ Cascade MASK R-CNN [32] as the detector.
217
+
218
+ ADE20K [62] is the most widely used semantic segmentation dataset, which contains 20K training and 2K validation images. We also conduct experiments on the ADE20K validation set and utilise UperNet [57] as the framework.
219
+
220
+ PASCAL VOC 0712 [14] is also widely used in object detection, which contains about 16K training and 5K validation images. VOC 0712 is much smaller than the latest benchmarks, so we treat it as a low-resource case. We adopt Faster RCNN [42] as the detector for VOC 0712.
221
+
222
+ All our experiments are conducted with 8x NVIDIA Tesla V100 GPUs. The experiments on PASCAL VOC and
223
+
224
+ <table><tr><td rowspan="2">Swin-L (198M)</td><td rowspan="2">Trained* Params</td><td rowspan="2">%</td><td rowspan="2">ΔFull</td><td rowspan="2">Extra Structure</td><td colspan="2">Pascal VOC (Faster RCNN)</td><td colspan="2">ADE20K (UperNet)</td></tr><tr><td>APBox</td><td>ΔLoRand</td><td>mIoU</td><td>ΔLoRand</td></tr><tr><td colspan="9">Baselines</td></tr><tr><td>FULL</td><td>198.58 M</td><td>100.00 %</td><td>-</td><td>X</td><td>84.43 %</td><td>- 2.69 %</td><td>53.25 %</td><td>+ 1.34 %</td></tr><tr><td>FIXED</td><td>0.00 M</td><td>0.00 %</td><td>- 100.00 %</td><td>X</td><td>85.19 %</td><td>- 1.93 %</td><td>32.21 %</td><td>- 19.70 %</td></tr><tr><td>ADAPTER-B</td><td>32.04 M</td><td>16.13 %</td><td>- 83.87 %</td><td>✓</td><td>80.93 %</td><td>- 6.19 %</td><td>46.23 %</td><td>- 5.68 %</td></tr><tr><td>ADAPTER-T</td><td>16.04 M</td><td>8.08 %</td><td>- 91.92 %</td><td>✓</td><td>78.10 %</td><td>- 9.02 %</td><td>43.51 %</td><td>- 8.40 %</td></tr><tr><td colspan="9">Our Methods</td></tr><tr><td>LORAND</td><td>3.59 M</td><td>1.84 %</td><td>- 98.16 %</td><td>✓</td><td>87.12 %</td><td>-</td><td>50.67 %</td><td>-</td></tr><tr><td>LORAND+</td><td>7.19 M</td><td>3.62 %</td><td>- 96.38 %</td><td>✓</td><td>87.63 %</td><td>+ 0.51 %</td><td>51.13 %</td><td>+ 0.46 %</td></tr><tr><td>LORAND++</td><td>14.24 M</td><td>7.17 %</td><td>- 92.83 %</td><td>✓</td><td>88.11 %</td><td>+ 0.99 %</td><td>51.87 %</td><td>+ 1.20 %</td></tr></table>
225
+
226
+ Table 1. Results of baselines and our methods on Pascal VOC and ADE20K benchmarks. Swin-L is employed as the pre-trained model here. We present the numbers and percentages of trainable backbone parameters on the left and all the performances on the right. * denotes the trainable parameters in backbones.
227
+
228
+ <table><tr><td rowspan="2">Swin-B (89M)</td><td rowspan="2">Trained* Params</td><td rowspan="2">%</td><td rowspan="2">ΔFull</td><td rowspan="2">Extra Structure</td><td colspan="4">COCO (Cascade Mask R-CNN)</td></tr><tr><td>APBox</td><td>ΔLoRand</td><td>APMask</td><td>ΔLoRand</td></tr><tr><td colspan="9">Baselines</td></tr><tr><td>FULL</td><td>89.14 M</td><td>100.00 %</td><td>-</td><td>X</td><td>51.90 %</td><td>+0.80 %</td><td>45.00 %</td><td>+0.90 %</td></tr><tr><td>FIXED</td><td>0.00 M</td><td>0.00 %</td><td>-100.00 %</td><td>X</td><td>15.30 %</td><td>-35.80 %</td><td>10.80 %</td><td>-33.8 %</td></tr><tr><td>ADAPTER-B</td><td>14.38 M</td><td>16.13 %</td><td>-83.87 %</td><td>✓</td><td>46.50 %</td><td>-4.60 %</td><td>40.20 %</td><td>-3.90 %</td></tr><tr><td>ADAPTER-T</td><td>7.20 M</td><td>8.08 %</td><td>-91.92 %</td><td>✓</td><td>43.20 %</td><td>-7.90 %</td><td>38.70 %</td><td>-5.40 %</td></tr><tr><td colspan="9">Our Methods</td></tr><tr><td>LORAND</td><td>2.39 M</td><td>2.76 %</td><td>-97.24 %</td><td>✓</td><td>51.10 %</td><td>-</td><td>44.10 %</td><td>-</td></tr><tr><td>LORAND+</td><td>4.73 M</td><td>5.31 %</td><td>-94.69 %</td><td>✓</td><td>51.20 %</td><td>+0.10 %</td><td>44.30 %</td><td>+0.20 %</td></tr><tr><td>LORAND++</td><td>9.32 M</td><td>10.46 %</td><td>-89.54 %</td><td>✓</td><td>51.50 %</td><td>+0.40 %</td><td>44.40 %</td><td>+0.30 %</td></tr></table>
229
+
230
+ Table 2. Results of baselines and our methods on COCO benchmarks. Swin-B is employed as the pre-trained model here. We present the numbers and percentages of trainable backbone parameters on the left and all the performances on the right. * denotes the trainable parameters in backbones.
231
+
232
+ ADE20K are based on Swin-S, Swin-B, and Swin-L pretrained models. Limited by GPU memory, the COCO experiments are based on Swin-T, Swin-S, and Swin-B.
233
+
234
+ # 4.2. Main Results
235
+
236
+ We first compare the trainable backbone parameters and performance of these methods on three benchmarks in Tables 1 and 2. Table 1 shows the results of PASCAL VOC and ADE20K datasets based on Swin-L, and Table 2 shows the results of COCO based on Swin-B. From Tables 1 and 2, we can see that:
237
+
238
+ 1) LoRand can effectively address the dilemma of fine-tuning in low-resource situations. Table 1 shows that FIXED outperforms FULL on the PASCAL VOC dataset, which implies that the powerful generalization ability of the pre-trained model is severely weakened during fine-tuning. Fine-tuning with low-resource data reduces the feature understanding of pre-trained models, which leads to poor performance on downstream tasks. LoRand avoids this disadvantage by fixing the original parameters.
239
+
240
+ More importantly, LoRand can absorb features from the new data through its smaller trainable structures. Table 1 indicates that LoRand outperforms FULL and FIXED by $2.69\%$ and $1.93\%$ on the low-resource dataset with only $1.84\%$ trainable backbone parameters. LoRand+ and LoRand++ also outperform FULL by $3.2\%$ and $3.68\%$ with $3.62\%$ and $7.17\%$ backbone parameters. In fact, there are many other common computer vision datasets with volumes similar to PASCAL VOC, including CUB-200-2011 [55], Oxford 102 Flowers [35], Stanford Cars [27], and Caltech-256 [16]. The prevalence of "Pretrained & Finetuning" leads us to focus more on giant benchmarks, but Table 1 suggests we need a better training paradigm to cope with the many low-resource situations in industrial applications. LoRand-Tuning proves to be a competitive candidate that brings promising performance and parameter-efficient approaches to low-resource cases.
241
+ 2) LoRand effectively balances the number of trainable backbone parameters and downstream task performance.
242
+
243
+ Tables 1 and 2 demonstrate that LoRand (standard) performs very closely to FULL on large benchmarks with only $1.84\%$ to $2.76\%$ trainable parameters. By tuning less than 3.6M backbone parameters, LoRand (standard) achieves $50.67\%$ (mIoU) on ADE20K, and $51.10\%$ $(\mathrm{AP}_{\mathrm{Box}})$ / $44.10\%$ $(\mathrm{AP}_{\mathrm{Mask}})$ on COCO, which is only about $1.5\%$ off on average compared to FULL. LoRand+ and LoRand++ further reduce the gap between the two paradigms to approximately $1\%$ with slight parameter increases. For Swin-L, LoRand saves about 195M parameters per copy compared to FULL. For Swin-B, LoRand saves about $86\mathrm{M}$ . These results suggest that we do not have to spend massive hardware resources to store these redundant parameters. Industrial service providers deliver thousands of model training tasks every day; with LoRand-Tuning, millions of gigabytes per year for model storage could be saved.
244
+
245
+ 3) LoRand effectively broadens the potential of conventional parameter-efficient adapter structures in dense predictions. From the results, we can draw similar conclusions to [24] that the standard adapter [22] performs worse than fine-tuning on dense predictions. Tables 1 and 2 illustrate that the ADAPTER's performance is far from FULL, although it reduces $80\%$ of trainable backbone parameters. Also adding new structures, LoRand achieves comparable performance to FULL by training fewer parameters than the ADAPTER. Overall, Tables 1 and 2 demonstrate the feasibility of parameter-efficient tuning paradigm in visual dense prediction tasks.
246
+
247
+ Comparisons with other fine-tuned backbones. We then show the comparisons of LoRand with some other remarkable fine-tuned backbones in Table 3. Table 3a shows the results based on Cascade Mask R-CNN and COCO, and Table 3b shows the results based on UperNet and ADE20K. Table 3 shows that LoRand (based on Swin Transformer) can outperform most existing fine-tuned backbones with less than 2M parameters. Compared to these backbones, LoRand not only presents more robust and superior results but also saves massive hardware resources in this era of parameter explosion. Specifically, LoRand (Swin-T) exceeds ResNeXt-101-64 on COCO by $1.9\%$ $\mathrm{(AP_{Box})}$ and $1.2\%$ $\mathrm{(AP_{Mask})}$ with 80.12M fewer new backbone parameters. Similarly, LoRand (Swin-L) surpasses ResNet-101 by $5.82\%$ (mIoU) on ADE20K with 40.41M fewer trainable backbone parameters.
248
+
249
+ Comparisons on different backbone scales. In addition to Swin-L and Swin-B, we also conduct extensive experiments on Swin-S and Swin-T. We illustrate the performance of baselines and LoRand on multiple backbones. Figure 5 shows the performance of the six methods on different backbone scales, which includes three Swin variants for each benchmark. As FIXED's performance on COCO and ADE20K is too low to display, we only show FIXED's results on PASCAL VOC.
250
+
251
+ (a) Comparisons between LoRand-Tuning and Fine-Tuning on COCO.
252
+
253
+ <table><tr><td>Backbone</td><td>Trained Params*</td><td>APBox</td><td>APMask</td></tr><tr><td colspan="4">Fine-Tuning Paradigm</td></tr><tr><td>ResNet-101</td><td>44 M</td><td>47.9 %</td><td>41.5 %</td></tr><tr><td>ResNeXt-101-32</td><td>40 M</td><td>48.1 %</td><td>41.6 %</td></tr><tr><td>ResNeXt-101-64</td><td>81 M</td><td>48.3 %</td><td>41.7 %</td></tr><tr><td>DeiT-S</td><td>22 M</td><td>48.0 %</td><td>41.4 %</td></tr><tr><td>Swin-T</td><td>29 M</td><td>50.5 %</td><td>43.7 %</td></tr><tr><td>Swin-S</td><td>50 M</td><td>51.8 %</td><td>44.7 %</td></tr><tr><td>Swin-B</td><td>88 M</td><td>51.9 %</td><td>45.0 %</td></tr><tr><td colspan="4">LoRand-Tuning</td></tr><tr><td>LoRand (Swin-T)</td><td>0.88 M</td><td>50.2 %</td><td>42.9 %</td></tr><tr><td>LoRand (Swin-S)</td><td>1.80 M</td><td>50.7 %</td><td>43.8 %</td></tr><tr><td>LoRand (Swin-B)</td><td>2.39 M</td><td>51.1 %</td><td>44.3 %</td></tr></table>
+
+ (b) Comparisons between LoRand-Tuning and Fine-Tuning on ADE20K.
+
+ <table><tr><td>Backbone</td><td>Trained Params*</td><td>mIoU</td></tr><tr><td colspan="3">Fine-Tuning</td></tr><tr><td>ResNet-18</td><td>12 M</td><td>39.97 %</td></tr><tr><td>ResNet-50</td><td>25 M</td><td>42.78 %</td></tr><tr><td>ResNet-101</td><td>44 M</td><td>44.85 %</td></tr><tr><td>DeiT-S</td><td>22 M</td><td>44.01 %</td></tr><tr><td>Swin-S</td><td>50 M</td><td>49.30 %</td></tr><tr><td>Swin-B</td><td>88 M</td><td>51.60 %</td></tr><tr><td>Swin-L</td><td>197 M</td><td>53.25 %</td></tr><tr><td colspan="3">LoRand-Tuning</td></tr><tr><td>LoRand (Swin-S)</td><td>1.80 M</td><td>47.33 %</td></tr><tr><td>LoRand (Swin-B)</td><td>2.39 M</td><td>49.62 %</td></tr><tr><td>LoRand (Swin-L)</td><td>3.59 M</td><td>50.67 %</td></tr></table>
255
+
256
+ Table 3. Comparisons between LoRand-Tuning and Fine-Tuning on ADE20K and COCO. We fine-tune multiple backbones and compare their performances with LoRand series. Architectures in (a) and (b) are Cascade Mask R-CNN and UperNet. Parameters in decoder and head are updated in both paradigms. * denotes the trainable parameters in backbones.
257
+
258
+ Figure 5 indicates that the performance of most methods improves as the backbone scale gets larger. For the LoRand series, more parameters bring better performance, but it is still challenging to outperform FULL on large datasets. For the ADAPTER, ADAPTER-B performs better than ADAPTER-T, suggesting that adding extra parameters does help improve adapter-tuning performance. Experiments on Swin variants systematically
259
+
260
+ ![](images/4799d8629c462496de8ce6fadebc123984f57f3c6291d7d764ef3994d63168c8.jpg)
261
+
262
+ ![](images/f1de32ed5f910da96337d105f6c2a278695e25f9594dff66acc215e4c8e21230.jpg)
263
+
264
+ ![](images/cf498dcc0f0551d854e6089165108833da42aef14b8789312e63f10306302f53.jpg)
265
+
266
+ ![](images/780459435f7dcd851bc2a17b3a882ae4f0dd704c150286faeb4d32782ced5b5b.jpg)
267
+
268
+ ![](images/c8f8fb199094b4d44297a887101b9142ca0aa4a8533d9a78d2804c16a7a98a80.jpg)
269
+ Figure 5. Seven methods on different backbone scales. Figures show results on PASCAL VOC, COCO, and ADE20K from left to right. Swin-S, Swin-B, and Swin-L are employed as the pre-trained models for PASCAL VOC and ADE20K. Swin-T, Swin-S, and Swin-B are employed for COCO. FIXED's performances are so low on COCO and ADE20K that they reduce the intuitiveness of the other six methods, so FIXED is only presented in PASCAL VOC comparisons.
270
+
271
+ ![](images/8beae4f9a09d00f90cecc036a470adcd74773704abd7430320102b6b1e266291.jpg)
272
+
273
+ ![](images/128d7434bbe4dec4e4d11a3c91b0ddb00bf1ce980db749fe4ff84624fbeb0a25.jpg)
274
+
275
+ ![](images/80ef1c9114fff622842c4aebee06a2c60b1be5b51ab9208d703be709a5193c33.jpg)
276
+ Figure 6. Ablation Study for $\alpha$ and $\beta$ . $\alpha$ ranges from 2, 4, 6, and $\beta$ ranges from 4, 8, 16. Figures from left to right present experiments on three benchmarks respectively. We only present $\mathrm{AP_{Box}}$ changes for COCO benchmark considering the strong correlation between the values of $\mathrm{AP_{Box}}$ and $\mathrm{AP_{Mask}}$ in COCO.
277
+
278
+ demonstrate that LoRand can outperform both FULL and traditional adapter structures in low-resource cases and perform very closely to FULL in large benchmarks.
279
+
280
+ # 4.3. Ablation Study
281
+
282
+ In this section, we ablate two key hyperparameters in LoRand: the LoRand branch number $\alpha$ and the kernel matrix dimension $\beta$ . $\alpha$ affects the distributed decision-making of LoRand, while $\beta$ focuses on a single branch's learning capability and consistency.
283
+
284
+ Several sets of ablation experiments are designed and implemented to investigate the effect of $\alpha$ and $\beta$ on the performance of LoRand. The ablation experiments were conducted on the same three benchmarks. In order to improve the upper limit of LoRand, our experiments are conducted on the largest backbone of each dataset (ADE20K/PASCAL VOC: Swin-L, COCO: Swin-B). The value sets of $\alpha$ and $\beta$ are $\{2,4,6\}$ and $\{4,8,16\}$ . Figure 6 shows the results of ablation studies on three datasets. In most cases, LoRand's performance increases slightly as $\alpha$ and $\beta$ become larger but hardly outperforms fine-tuning on large benchmarks. Besides, exponentially increasing the size of the LoRand does
285
+
286
+ not result in an equivalent performance improvement and even leads to a reduction ( $\alpha = 6$ in VOC and COCO). Ablation studies demonstrate that larger LoRands have fewer gains both in parameter efficiency and performance. We have considered this trade-off when designing the LoRand standard, LoRand+, and LoRand++.
287
+
288
+ # 5. Conclusion
289
+
290
+ This paper presents LoRand, a parameter-efficient low-rank adapter for dense predictions, which completely shares the feature understanding of advanced pre-trained models and effectively transfers it to downstream tasks. LoRand performs on par with fine-tuning in COCO instance segmentation, ADE20K semantic segmentation, and PASCAL VOC object detection with only $1\%$ to $3\%$ trainable backbone parameters. Moreover, LoRand effectively avoids the disadvantages of the fine-tuning paradigm and delivers better performance in low-resource situations. We hope that parameter-efficient LoRand can save massive redundant storage resources and facilitate a unified training paradigm for vision and language.
291
+
292
+ # References
293
+
294
+ [1] Caisse Amisse, Mario Ernesto Jijón-Palma, and Jorge Antonio Silva Centeno. Fine-tuning deep learning models for pedestrian detection. *Boletim de Ciências Geodésicas*, 27, 2021. 1
295
+ [2] Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettle-moyer, and Michael Auli. Cloze-driven pretraining of self-attention networks. arXiv preprint arXiv:1903.07785, 2019. 1
296
+ [3] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021. 1
297
+ [4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020. 1
298
+ [5] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European conference on computer vision, pages 213-229. Springer, 2020. 2
299
+ [6] Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, et al. Mmdetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019. 5
300
+ [7] Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, and Ping Luo. Adaptformer: Adapting vision transformers for scalable visual recognition. arXiv preprint arXiv:2205.13535, 2022. 3
301
+ [8] Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, and Yu Qiao. Vision transformer adapter for dense predictions. arXiv preprint arXiv:2205.08534, 2022. 3
302
+ [9] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 1, 2, 5
303
+ [10] Zhengming Ding and Yun Fu. Deep transfer low-rank coding for cross-domain learning. IEEE transactions on neural networks and learning systems, 30(6):1768-1779, 2018. 3
304
+ [11] Zhengming Ding, Ming Shao, and Yun Fu. Deep low-rank coding for transfer learning. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015. 3
305
+ [12] Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. arXiv preprint arXiv:2002.06305, 2020. 2
306
+ [13] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2020. 2
307
+
308
+ [14] Mark Everingham, SM Eslami, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes challenge: A retrospective. International journal of computer vision, 111(1):98-136, 2015. 2, 5
309
+ [15] Christoph Feichtenhofer, Haoqi Fan, Yanghao Li, and Kaiming He. Masked autoencoders as spatiotemporal learners. arXiv preprint arXiv:2205.09113, 2022. 2
310
+ [16] Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. 2007. 6
311
+ [17] Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao, Ao Zhang, Liang Zhang, et al. Pre-trained models: Past, present and future. AI Open, 2:225-250, 2021. 1
312
+ [18] Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. Towards a unified view of parameter-efficient transfer learning. In International Conference on Learning Representations, 2021. 3
313
+ [19] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollar, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000-16009, 2022. 2
314
+ [20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 3, 4
315
+ [21] Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, and Xin Eric Wang. Parameter-efficient fine-tuning for vision transformers. arXiv preprint arXiv:2203.16329, 2022. 3
316
+ [22] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799. PMLR, 2019. 2, 3, 5, 7
317
+ [23] Fatsuma Jauro, Haruna Chiroma, Abdulsalam Y Gital, Mubarak Almutairi, M Abdulhamid Shafi'i, and Jemal H Abawajy. Deep learning architectures in emerging cloud computing architectures: Recent development, challenges and next research trend. Applied Soft Computing, 96:106582, 2020. 1
318
+ [24] Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. arXiv preprint arXiv:2203.12119, 2022. 2, 3, 7
319
+ [25] Shibo Jie and Zhi-Hong Deng. Convolutional bypasses are better vision transformer adapters. arXiv preprint arXiv:2207.07039, 2022. 3
320
+ [26] Christoph Käding, Erik Rodner, Alexander Freytag, and Joachim Denzler. Fine-tuning deep neural networks in continuous learning scenarios. In *Asian Conference on Computer Vision*, pages 588–605. Springer, 2016. 1
321
+ [27] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In Proceedings of the IEEE international conference on computer vision workshops, pages 554–561, 2013. 6
322
+
323
+ [28] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In European conference on computer vision, pages 740-755. Springer, 2014. 1, 2, 5
324
+ [29] Fanfan Liu, Haoran Wei, Wenzhe Zhao, Guozhen Li, Jingquan Peng, and Zihao Li. Wb-detr: Transformer-based detector without backbone. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2979-2987, 2021. 2
325
+ [30] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586, 2021. 1, 2
326
+ [31] Yen-Cheng Liu, Chih-Yao Ma, Junjiao Tian, Zijian He, and Zsolt Kira. Polyhistor: Parameter-efficient multi-task adaptation for dense vision tasks. arXiv preprint arXiv:2210.03265, 2022. 3
327
+ [32] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012-10022, 2021. 1, 2, 5
328
+ [33] Cheng Long Li, Andong Lu, Ai Hua Zheng, Zhengzheng Tu, and Jin Tang. Multi-adapter rgbt tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, pages 0-0, 2019. 3
329
+ [34] Yuning Mao, Lambert Mathias, Rui Hou, Amjad Alma-hairi, Hao Ma, Jiawei Han, Scott Yih, and Madian Khabsa. Unipelt: A unified framework for parameter-efficient language model tuning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6253-6264, 2022. 3
330
+ [35] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pages 722-729. IEEE, 2008. 6
331
+ [36] Matthew E Peters, Sebastian Ruder, and Noah A Smith. To tune or not to tune? adapting pretrained representations to diverse tasks. arXiv preprint arXiv:1903.05987, 2019. 2, 3
332
+ [37] Jonas Pfeiffer, Aishwarya Kamath, Andreas Rückle, Kyunghyun Cho, and Iryna Gurevych. Adapterfusion: Nondestructive task composition for transfer learning. In 16th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2021, pages 487-503. Association for Computational Linguistics (ACL), 2021. 2, 3
333
+ [38] Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. Adapterhub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46-54, 2020. 1, 2
334
+ [39] Jonathan Pilault, Christopher Pal, et al. Conditionally adaptive multi-task learning: Improving transfer learning in nlp using fewer parameters & less data. In International Conference on Learning Representations, 2020. 3
335
+
336
+ [40] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67, 2020. 1
337
+ [41] Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. Learning multiple visual domains with residual adapters. Advances in neural information processing systems, 30, 2017. 3
338
+ [42] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28, 2015. 5
339
+ [43] Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, and Neil Houlsby. Scaling vision with sparse mixture of experts. Advances in Neural Information Processing Systems, 34:8583-8595, 2021. 4
340
+ [44] Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. Adapterdrop: On the efficiency of adapters in transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7930-7946, 2021. 2, 3
341
+ [45] Omer Sagi and Lior Rokach. Ensemble learning: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4):e1249, 2018. 4
342
+ [46] Chompunuch Sarasaen, Soumick Chatterjee, Mario Breitkopf, Georg Rose, Andreas Nurnberger, and Oliver Speck. Fine-tuning deep learning model parameters for improved super-resolution of dynamic mri with prior-knowledge. Artificial Intelligence in Medicine, 121:102196, 2021. 1
343
+ [47] Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbahn, and Pablo Villalobos. Compute trends across three eras of machine learning. arXiv preprint arXiv:2202.05924, 2022. 1
344
+ [48] Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, et al. Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model. arXiv preprint arXiv:2201.11990, 2022. 1
345
+ [49] Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in nlp. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, 2019. 1
346
+ [50] Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2446-2454, 2020. 1
347
+ [51] Yi-Lin Sung, Jaemin Cho, and Mohit Bansal. Vl-adapter: Parameter-efficient transfer learning for vision-and-language tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5227-5237, 2022. 3
348
+
349
+ [52] B Thilagavathi, K Suthendran, and K Srujanraju. Evaluating the adaboost algorithm for biometric-based face recognition. In Data Engineering and Communication Technology, pages 669-678. Springer, 2021. 4
350
+ [53] Edna Chebet Too, Li Yujiang, Sam Njuki, and Liu Yingchun. A comparative study of fine-tuning deep learning models for plant disease identification. Computers and Electronics in Agriculture, 161:272-279, 2019. 1
351
+ [54] Thijs Vogels, Sai Praneeth Karimireddy, and Martin Jaggi. Practical low-rank communication compression in decentralized deep learning. Advances in Neural Information Processing Systems, 33:14171-14181, 2020. 3
352
+ [55] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011. 6
353
+ [56] Xudong Wang, Zhaowei Cai, Dashan Gao, and Nuno Vasconcelos. Towards universal object detection by domain attention. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7289-7298, 2019. 3
354
+ [57] Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In Proceedings of the European conference on computer vision (ECCV), pages 418-434, 2018. 5
355
+ [58] Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, et al. A roadmap for big model. arXiv preprint arXiv:2203.14101, 2022. 1
356
+ [59] Aston Zhang, Yi Tay, SHUAI Zhang, Alvin Chan, Anh Tuan Luu, Siu Hui, and Jie Fu. Beyond fully-connected layers with quaternions: Parameterization of hypercomplex multiplications with $1/n$ parameters. In International Conference on Learning Representations, 2020. 2, 3
357
+ [60] Chaoning Zhang, Chenshuang Zhang, Junha Song, John Seon Keun Yi, Kang Zhang, and In So Kweon. A survey on masked autoencoder for self-supervised learning in vision and beyond. arXiv preprint arXiv:2208.00173, 2022. 2
358
+ [61] Jianwei Zhao, Yongbiao Lv, Zhenghua Zhou, and Feilong Cao. A novel deep learning algorithm for incomplete face recognition: Low-rank-recovery network. Neural Networks, 94:115-124, 2017. 3
359
+ [62] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 633-641, 2017. 2, 5
1vs100parameterefficientlowrankadapterfordensepredictions/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5eb4d4225bf5168f4d1636c8e79bc80473e42d5359de77bd3c4a9db4a7aafa8d
3
+ size 571629
1vs100parameterefficientlowrankadapterfordensepredictions/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:88d8b6a61f1d171b85c742c8a856bd9112bbfb63b532f3b7ef795e6f9b841b44
3
+ size 464495
2pcnettwophaseconsistencytrainingfordaytonightunsuperviseddomainadaptiveobjectdetection/818b1ea7-c7c2-488e-9c91-78c9a94fffa2_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3383bdd458d46dc5310e53299ed89558f714255755bef0849725ad8921878cde
3
+ size 73175
2pcnettwophaseconsistencytrainingfordaytonightunsuperviseddomainadaptiveobjectdetection/818b1ea7-c7c2-488e-9c91-78c9a94fffa2_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3b405c0a35ca9e30539154b3b1be8b7a99626964ee523603ec9b667640d90dee
3
+ size 87774
2pcnettwophaseconsistencytrainingfordaytonightunsuperviseddomainadaptiveobjectdetection/818b1ea7-c7c2-488e-9c91-78c9a94fffa2_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ec973672e8927fd04aab49af7ca2fc3783a2bb46fd7bfc9ea4bbbc7782e96a77
3
+ size 1846162
2pcnettwophaseconsistencytrainingfordaytonightunsuperviseddomainadaptiveobjectdetection/full.md ADDED
@@ -0,0 +1,301 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # 2PCNet: Two-Phase Consistency Training for Day-to-Night Unsupervised Domain Adaptive Object Detection
2
+
3
+ Mikhail Kennerley $^{1,2}$ , Jian-Gang Wang $^{2}$ , Bharadwaj Veeravalli $^{1}$ , and Robby T. Tan $^{1}$ $^{1}$ National University of Singapore, Department of Electrical and Computer Engineering
4
+ $^{2}$ Institute for Infocomm Research, A*STAR
5
+ mikhailk@u.nus.edu, jgwang@i2r.a-star.edu.sg, elebv@nus.edu.sg, robby.tan@nus.edu.sg
6
+
7
+ # Abstract
8
+
9
+ Object detection at night is a challenging problem due to the absence of night image annotations. Despite several domain adaptation methods, achieving high-precision results remains an issue. False-positive error propagation is still observed in methods using the well-established student-teacher framework, particularly for small-scale and low-light objects. This paper proposes a two-phase consistency unsupervised domain adaptation network, 2PCNet, to address these issues. The network employs high-confidence bounding-box predictions from the teacher in the first phase and appends them to the student's region proposals for the teacher to re-evaluate in the second phase, resulting in a combination of high- and low-confidence pseudo-labels. The night images and pseudo-labels are scaled down before being used as input to the student, providing stronger small-scale pseudo-labels. To address errors that arise from low-light regions and other night-related attributes in images, we propose a night-specific augmentation pipeline called NightAug. This pipeline involves applying random augmentations, such as glare, blur, and noise, to daytime images. Experiments on publicly available datasets demonstrate that our method achieves results superior to state-of-the-art methods by $20\%$ , and superior to supervised models trained directly on the target data.
10
+
11
+ # 1. Introduction
12
+
13
+ Nighttime object detection is critical in many applications. However, the requirement of annotated data by supervised methods is impractical, since annotated night data is scarce, and supervised methods are generally prone to overfitting to the training data. Among other reasons, this scarcity is due to poor lighting conditions, which make nighttime images hard to annotate. Hence, methods that
14
+
15
+ ![](images/5263fe911ce1f0a867c50db1a23864721ae578206f0864dbf948d76be7debed0.jpg)
16
+ DA Faster-RCNN
17
+
18
+ ![](images/2e63e3fe63bff5506f7606feab4b1ad88b1cbdc1709be968f5f988018d6f6771.jpg)
19
+ UMT
20
+
21
+ ![](images/350b7bd91271c7942627218e0872edb02ac1007c41555ba9accbebbb4a8b99d5.jpg)
22
+ AT
23
+ Figure 1. Qualitative results of state-of-the-art DA methods, DA Faster-RCNN [3], UMT [7], Adaptive Teacher (AT) [15] and our method 2PCNet on the BDD100K [36] dataset. Unlike the SOTA methods, our method is able to detect dark and small scale objects with minimal additional false positive predictions.
24
+
25
+ ![](images/53f055484860341fb6f8bbfbf09bbdc6bb26422bdf984c4ac17b1fca1945835d.jpg)
26
+ 2PCNet (Ours)
27
+
28
+ do not assume the availability of the annotations are more advantageous. Domain adaptation (DA) is an efficient solution to this problem by allowing the use of readily available annotated source daytime datasets.
29
+
30
+ A few domain adaptation methods have been proposed, e.g., adversarial learning which uses image and instance level classifiers [3] and similar concepts [22, 32]. However, these methods isolate the domain adaptation task purely towards the feature extractor, and suppress features of the target data for the sake of domain invariance. Recent unsupervised domain adaptation methods exploit the student-teacher framework (e.g. [1,7,11,15]). Since the student initially learns from the supervised loss, there is a bias towards the source data. Augmentation [7, 11] and adversarial learning [15] have been proposed to address this problem. Unfortunately, particularly for day-to-night unsupervised domain adaptation, these methods suffer from a large number
31
+
32
+ of inaccurate pseudo-labels produced by the teacher. In our investigation, the problem is notably due to insufficient knowledge of small-scale features in the nighttime domain, which are then propagated through the learning process between the teacher and student, resulting in poor object detection performance.
33
+
34
+ To address the problem, in this paper, we present 2PCNet, a two-phase consistency unsupervised domain adaptation network for nighttime object detection. Our 2PCNet merges the bounding-boxes of highly confident pseudo-labels, which are predicted in phase one, together with regions proposed by the student's region proposal network (RPN). The merged proposals are then used by the teacher to generate a new set of pseudo-labels in phase two. This provides a combination of high- and low-confidence pseudo-labels. These pseudo-labels are then matched with predictions generated by the student. We can then utilise a weighted consistency loss to ensure that a higher weightage of our unsupervised loss is based on stronger pseudo-labels, yet allow for weaker pseudo-labels to influence the training.
35
+
36
+ Equipped with this two-phase strategy, we address the problem of errors from small-scale objects. We devise a student-scaling technique, where night images and their pseudo-labels for the student are deliberately scaled down. In order to generate accurate pseudo-labels, images to the teacher remain at their full scale. This results in the pseudo-labels of larger objects, which are easier to predict, being scaled down to smaller objects, allowing for an increase in the small-scale performance of the student.
37
+
38
+ Nighttime images suffer from multiple complications not found in daytime scenes, such as dark regions, glare, prominent noise, prominent blur, imbalanced lighting, etc. All of these cause a problem, since the student, which was trained on daytime images, is much more biased towards the daytime domain's characteristics. To mitigate this problem, we propose NightAug, a set of random nighttime-specific augmentations. NightAug includes adding artificial glare, noise, blur, etc. that mimic night conditions to daytime images. With NightAug we are able to reduce the bias of the student network towards the source data without resorting to adversarial learning or compute-intensive translations. Overall, using 2PCNet, we can see the qualitative improvements of our results in Figure 1. In summary, the contributions of this paper are as follows:
39
+
40
+ - We present 2PCNet, a two-phase consistency approach for student-teacher learning. 2PCNet takes advantage of highly confident teacher labels augmented with less confident regions, which are proposed by the scaled student. This strategy produces a sharp reduction of the error propagation in the learning process.
41
+ - To address the bias of the student towards the source domain, we propose NightAug, a random nighttime-specific
42
+
43
+ augmentation pipeline to shift the characteristics of daytime images toward nighttime.
44
+
45
+ - The effectiveness of our approach has been verified by comparing it with state-of-the-art domain adaptation approaches. An improvement of $+7.9\mathrm{AP}(+20\%)$ and $+10.2\mathrm{AP}(+26\%)$ over the SOTA on BDD100K and SHIFT has been achieved, respectively.
46
+
47
+ # 2. Related Work
48
+
49
+ Unsupervised Domain Adaptation (UDA) Unsupervised domain adaptation aims to learn transferable features to reduce the discrepancy between a labelled source and an unlabelled target domain. Previous works minimised distribution distance metrics such as the maximum mean discrepancy (MMD) [16-18] and considered intra-class and inter-class discrepancy [12, 13]. Adversarial feature learning involves adding an adversarial classifier to play the min-max game between the domain discriminator and feature extractors to generate a domain-invariant feature map [27, 28, 37]. These methods have been applied to image classification. Our work focuses on object detection, which is more complex as it involves identifying multiple bounding boxes and associated classes in each image.
50
+
51
+ UDA for Object Detection Object detection with UDA is a recent challenge due to the complexities of identifying multiple objects in an image. DA-Faster RCNN [3] integrated adversarial learning with image and instance level classifiers, and several approaches have been proposed to improve on this method by introducing scale-awareness [4], class specific discriminators [31], and re-purposing the task-specific classifier as a discriminator [2]. The Mean Teacher (MT) framework [26] has been adopted in semi-supervised methods, such as UMT [7], which incorporates CycleGAN [39] augmented images; AT [15], which combines the student-teacher framework with adversarial learning; and TDD [11], which uses dual student-teacher networks with style transfer.
52
+
53
+ Nighttime UDA The majority of research on unsupervised domain adaptation (UDA) in nighttime scenarios has focused on semantic segmentation [5, 8, 9, 14, 23, 29, 33]. Translation and style transformation techniques are commonly used to reduce the domain gap between the source and target domains in these methods [8,29,33]. Some UDA-based techniques for nighttime also utilise paired-images to generate a shared feature space [23], while others use an intermediate domain such as twilight to reduce the domain gap during unsupervised learning [5].
54
+
55
+ Nighttime tracking has also been investigated, where adversarial transformers are used to close the domain gap [35]. However, there is a gap in research when it comes to applying UDA techniques to the object detection task for nighttime
56
+
57
+ ![](images/d79fc3147efb095c9d9a480464ef2004c703c1cc7d2c0b76ce09ed2f1902d44e.jpg)
58
+ Figure 2. Overview of our proposed framework, 2PCNet. 2PCNet consists of: a student network trained on both labelled daytime images, augmented with NightAug, and unlabelled nighttime images; and a teacher network, which is the exponential moving average (EMA) of the student and provides matched pseudo-labels for the unsupervised loss. The matched pseudo-labels are the predictions of the teacher (phase two) using the RPN proposals of the student, which in turn are guided by the high-confidence pseudo-labels of the teacher (phase one).
59
+
60
+ scenarios. Therefore, we explore the application of UDA techniques in object detection under low-light and nighttime conditions.
61
+
62
+ # 3. Proposed Method
63
+
64
+ Let $\mathbf{D}_s$ be the daytime source data. $\mathbf{D}_s = \{I_s, C_s, B_s\}$ , where the variables refer to the image, class label and bounding-box label, respectively. Index $s$ indicates the daytime source. The night target data is represented by $\mathbf{D}_t$ , where $\mathbf{D}_t = \{I_t\}$ as we do not have the target labels available to us. Index $t$ indicates the nighttime target.
65
+
66
+ The architecture of our 2PCNet is shown in Figure 2. Our 2PCNet consists of a student and a teacher network. The student is a multi-domain network trained on both labelled daytime images, augmented with NightAug, and unlabelled nighttime images. The teacher focuses on night images to produce pseudo-labels for the student and is the exponential moving average (EMA) of the student. After an initial pretraining phase, which allows the student to initialise the feature extractor and detector, the teacher begins producing pseudo-labels.
67
+
68
+ During each iteration, in phase one of 2PCNet, the teacher produces pseudo-labels from the night images. These pseudo-labels are filtered through a confidence
69
+
70
+ threshold. This is to ensure only high-confidence pseudo-labels are given to the student. The bounding-boxes from the pseudo-labels are then combined with the region proposals generated by the student's RPN. The merged region proposals are then used to generate predictions from the student's RoI network. In phase two, the teacher utilises the same merged region proposals to generate a matched set of pseudo-labels, where each pseudo-label has its corresponding prediction obtained from the student.
71
+
72
+ As mentioned earlier, our student network is initialised by pretraining for a set number of iterations. This is done with supervised loss on the augmented daytime images:
73
+
74
+ $$
75
+ L_{\mathrm{sup}} = L_{\mathrm{rpn}}\left(B_{s}, I_{s}\right) + L_{\mathrm{roi}}\left(B_{s}, C_{s}, I_{s}\right), \tag{1}
76
+ $$
77
+
78
+ where $L_{\mathrm{rpn}}$ represents the loss from the RPN, which consists of an objectness and bounding-box regression loss. $L_{\mathrm{roi}}$ represents the loss from the detector network, consisting of a classification and bounding-box regression loss.
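As a rough illustration of Eq. (1) (not the authors' code), torchvision's Faster R-CNN already exposes the RPN and RoI loss terms when called in training mode on labelled images; the image size and the 11-way class count (10 BDD100K classes plus background) are assumptions made for this sketch.

```python
# Minimal sketch of the supervised loss in Eq. (1) using torchvision's Faster R-CNN.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=11)
model.train()

images = [torch.rand(3, 600, 1067)]                      # an augmented daytime image I_s
targets = [{"boxes": torch.tensor([[100.0, 120.0, 300.0, 400.0]]),
            "labels": torch.tensor([1])}]                # its labels (B_s, C_s)

loss_dict = model(images, targets)                       # dict of RPN and RoI loss terms
l_rpn = loss_dict["loss_objectness"] + loss_dict["loss_rpn_box_reg"]
l_roi = loss_dict["loss_classifier"] + loss_dict["loss_box_reg"]
l_sup = l_rpn + l_roi                                    # Eq. (1)
```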
79
+
80
+ Once the pretraining is completed, the student's weights are then transferred over to the teacher. In the succeeding iterations, the teacher's weights are the exponential moving average (EMA) of the student's. The matched pseudo-labels generated by the teacher, $\{C_p^*, B_p^*\}$ , are then used to guide
81
+
82
+ ![](images/c67cd0ceb3844b100df47930828639c1f30a507ba48e45e37211aff677e01841.jpg)
83
+ Figure 3. (Left to Right, Top to Bottom) Ground truth bounding boxes, bounding boxes predicted by the teacher with non-maximal suppression (NMS) and thresholding $(B_{p})$ , bounding boxes predicted by the student $(B_{\mathrm{student}})$ which is guided by $B_{p}$ , and the bounding boxes predicted by the teacher $(B_{p}^{*})$ for the consistency loss.
84
+
85
+ ![](images/19d2798780ffd2eaa87261a168fdf6db1a82fe1e65293999ac403404e0f935dc.jpg)
86
+
87
+ the unsupervised loss, defined as:
88
+
89
+ $$
90
+ L_{\mathrm{unsup}} = L_{\mathrm{rpn}}^{\mathrm{obj}}\left(C_{p}^{*}; I_{t}\right) + L_{\mathrm{cons}}\left(C_{p}^{*}; I_{t}\right), \tag{2}
91
+ $$
92
+
93
+ where $L_{\mathrm{rpn}}^{\mathrm{obj}}$ is the objectness loss of the RPN and $L_{\mathrm{cons}}$ is the weighted KL-Divergence loss from the predicted outputs which we will further explain in the next section.
94
+
95
+ # 3.1. Two-Phase Consistency
96
+
97
+ Due to the large domain gap between daytime source images and nighttime target images, the teacher is unable to produce high quality pseudo-labels. This generally occurs in the whole scene, but particularly for regions with strong night characteristics, e.g., low-light, glare, uneven lighting, etc. The teacher produces confident pseudo-labels only for regions that share more similarities to the daytime, since it is biased towards the daytime domain. This bias poses a problem for methods that employ a hard-threshold to filter pseudo-labels for categorical cross-entropy loss [7, 15, 26]. The remaining pseudo-labels contain only easy samples with daytime attributes. Consequently, the student does not learn from harder (e.g. darker) areas.
98
+
99
+ As a result of minimal knowledge of the hard samples (i.e., areas with a high level of nighttime attributes), the teacher begins to predict highly confident yet incorrect pseudo-labels. As the teacher provides these incorrect pseudo-labels to the student, a vicious cycle starts where the teacher in turn is updated with incorrect knowledge. Consequently, the error continues to propagate through training. In our case, these errors notably occur in dark/glare regions and on small-scale objects.
100
+
101
+ To address the problem of error propagation, we design a two-phase approach that combines high confidence
102
+
103
+ pseudo-labels together with their less confident counterparts. This combination allows the high accuracy of confident labels, along with the additional knowledge of less confident labels, to be distilled onto the student. In phase one, the unlabelled nighttime image, $I_{t}$ , is used as an input for the teacher to generate pseudo-labels. These pseudo-labels are filtered with a threshold to retain only high-confidence pseudo-labels, $(C_p, B_p)$ . The bounding-boxes of the pseudo-labels, $B_{p}$ , are then used as an input to the student. $B_{p}$ is concatenated to the region proposals generated by the student RPN module:
104
+
105
+ $$
106
+ P^{*} = \mathrm{RPN}_{\mathrm{student}}\left(I_{t}\right) \oplus B_{p}, \tag{3}
107
+ $$
108
+
109
+ where $P^{*}$ is the combined region proposals, which are then used as an input to the student's RoI module to predict the classes, $C_{\mathrm{student}}$ , and bounding-box, $B_{\mathrm{student}}$ , of each region proposal.
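A minimal sketch of this phase-one merge, assuming `teacher_boxes`/`teacher_scores` hold the teacher's post-NMS detections and `student_proposals` the student RPN output; these names and the tensor layout are illustrative, not the authors' API.

```python
# Phase one (Eq. 3): keep high-confidence teacher pseudo-boxes B_p and append them
# to the student's region proposals to form the combined set P*.
import torch

def merge_proposals(teacher_boxes, teacher_scores, student_proposals, conf_thresh=0.8):
    keep = teacher_scores >= conf_thresh                  # confidence filtering of pseudo-labels
    b_p = teacher_boxes[keep]                             # high-confidence pseudo-boxes B_p
    return torch.cat([student_proposals, b_p], dim=0)     # combined proposals P*
```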
110
+
111
+ Phase two begins by using the same combined region proposals, $P^{*}$ , generated in phase one as an input to the teacher's RoI module to generate a matched set of pseudo-labels:
112
+
113
+ $$
114
+ \left\{C_{p}^{*}, B_{p}^{*}\right\} = \mathrm{RoI}_{\mathrm{teacher}}\left(P^{*}\right). \tag{4}
115
+ $$
116
+
117
+ The difference between $C_p$ and $C_p^*$ is that $C_p^*$ is derived from the same region proposals as that of the student predictions $C_{\mathrm{student}}$ . This allows us to compare $C_{\mathrm{student}}$ and $C_p^*$ directly:
118
+
119
+ $$
120
+ \begin{array}{l} \left\{C_{\mathrm{student}}(n), B_{\mathrm{student}}(n)\right\} = \mathrm{RoI}_{\mathrm{student}}\left(P^{*}(n)\right), \\ \left\{C_{p}^{*}(n), B_{p}^{*}(n)\right\} = \mathrm{RoI}_{\mathrm{teacher}}\left(P^{*}(n)\right), \end{array} \tag{5}
121
+ $$
122
+
123
+ where $n = \{1,2,\dots,N\}$ and $N$ is the number of region proposals in $P^*$ . This operation ensures that the knowledge of highly confident predictions generated by the teacher is distilled through to the student. In addition, information from less confident predictions can also be learnt. However, we are still required to penalise less confident samples and thus employ a weighted KL-Divergence as our consistency loss:
124
+
125
+ $$
126
+ L_{\mathrm{cons}} = \alpha \, \mathrm{KL}\left(C_{\mathrm{student}}, C_{p}^{*}\right), \tag{6}
127
+ $$
128
+
129
+ where $\alpha$ is the highest confidence of $C_p^*$ , expressed as $\alpha = \max(C_p^*)$ , and KL() is the KL-divergence function. Note that pseudo bounding-boxes are not used to generate the unsupervised loss, as the confidence score of each pseudo-label represents the class information rather than the bounding box. The outputs of each segment of our two-phase approach are shown in Figure 3.
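A sketch of this weighted consistency loss, under the assumption that `student_logits` and `teacher_logits` are the class logits produced by the student and teacher RoI heads for the same matched proposals $P^*$ (shape `[N, num_classes]`), and that $\alpha$ acts as a per-proposal weight; these names and that interpretation are assumptions, not the paper's code.

```python
# Weighted KL-divergence consistency loss (Eq. 6) over matched student/teacher predictions.
import torch
import torch.nn.functional as F

def consistency_loss(student_logits, teacher_logits):
    log_p_student = F.log_softmax(student_logits, dim=-1)    # student class distribution (log)
    p_teacher = F.softmax(teacher_logits, dim=-1)            # teacher class distribution C_p*
    alpha = p_teacher.max(dim=-1).values                     # per-proposal confidence weight
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=-1)
    return (alpha * kl).mean()
```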
130
+
131
+ # 3.2. Student-Scaling
132
+
133
+ In our investigation, we have found that the scale of objects has a strong influence on object detection at night. This
134
+
135
+ Algorithm 1 Single Augmentation - NightAug
136
+ imgClean $\leftarrow$ img
137
+ if randFloat $\geq 0.5$ then
+   randFloat $\leftarrow 0.8*$ randFloat $+0.2$
+   img $\leftarrow$ augmentation(img, randval)
+   prob $\leftarrow 0.4$
+   while randFloat $\geq$ prob do
+     $x \gets$ randInt(img.shape[1], 2)
+     $y \gets$ randInt(img.shape[2], 2)
+     img[x, y] $\leftarrow$ imgClean[x, y]
+     prob $\leftarrow$ prob $+0.1$
+   end while
138
+ end if
139
+
140
+ is due to the features of smaller objects being easily overwhelmed by glare or noise. To allow the student to overcome this, we apply scaling augmentation to the student's inputs, which include both the image and the pseudo-labels generated by the teacher. As training proceeds, we follow a schedule to increase the scale of the student augmentation until it equals that of the original image. By iteratively increasing the scale, we allow the student to focus on smaller features earlier in the training process. This process encourages the teacher to make more accurate predictions on smaller-scale objects in the later stages of training. In turn, accurate small-scale pseudo-labels allow for the increase in the scale of the student's inputs with minimal errors due to scale.
141
+
142
+ To ensure that the knowledge of the previous scales is not forgotten, a Gaussian function is applied to the scaling factor, with its norm obtained from the schedule values. To prevent additional noise due to pseudo-labels being too small, labels with an area below a threshold are removed.
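A minimal sketch of this student-scaling step; the Gaussian spread, the scale clamping range, and the area threshold are illustrative values rather than the paper's exact settings, and the boxes are assumed to be in (x1, y1, x2, y2) format.

```python
# Student-scaling (Sec. 3.2): resize the night image and its pseudo-boxes by a factor
# sampled around the currently scheduled scale, and drop pseudo-boxes that become too small.
import torch
import torch.nn.functional as F

def scale_student_input(image, boxes, scheduled_scale, sigma=0.05, min_area=16.0):
    s = float(torch.normal(torch.tensor(scheduled_scale), torch.tensor(sigma)))
    s = max(min(s, 1.0), 0.1)                                 # keep the factor in a sane range
    image = F.interpolate(image.unsqueeze(0), scale_factor=s,
                          mode="bilinear", align_corners=False).squeeze(0)
    boxes = boxes * s                                         # pseudo-boxes scale with the image
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return image, boxes[areas >= min_area]
```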
143
+
144
+ # 3.3. NightAug
145
+
146
+ Night images suffer from a range of complications that are not present in daytime scenes. This causes a problem in the student-teacher framework, where the student is biased towards the source domain. Previous methods have attempted to address this, but have either required compute-intensive translations [7, 11] or added domain classifiers to the framework [15], which complicates training. We propose NightAug, a nighttime-specific augmentation pipeline that is compute-light and does not require training. NightAug consists of a series of augmentations with the aim of steering the characteristics of daytime images to resemble those of nighttime images.
147
+
148
+ The defining features of nighttime images are that they are darker and have lower contrast than daytime images. In addition, the signal-to-noise ratio (SNR) can be lower due to the properties of digital cameras, such as luminance and
149
+
150
+ ![](images/44477bcfe46dd0b404ebc19a5eabcfb95708b6ebf6fb6883592a6a5e4c257b7b.jpg)
151
+ Figure 4. NightAug: Original image (top-left) and images with random augmentations from: gaussian blur, gamma correction, brightness, contrast, glare, gaussian noise and random cut-outs.
152
+
153
+ ![](images/28b82fb10b3a24991f39d715a100a831e46d1756277035c6f05cde59023c911f.jpg)
154
+
155
+ colour noise. Glare and glow from street lamps and headlights are also present in nighttime images. Additionally, images may be out-of-focus due to the camera's inability to detect reference points to focus on in dark environments.
156
+
157
+ Keeping in mind the properties of nighttime images, our NightAug includes random brightness, contrast, gamma, Gaussian-noise, and Gaussian-blur augmentations, as well as random glare insertion. The augmentations are randomly applied to the images and are also random in intensity. This randomness results in a wider variance of images being exposed to the student, leading to more robust training [30]. To further increase the variance of the images, at each augmentation step, random segments of the image ignore the application of that augmentation. This mimics nighttime images in which different areas may be unevenly lit; this uneven lighting affects the above characteristics of the local region.
158
+
159
+ A single augmentation flow of NightAug is demonstrated in Algorithm 1. Samples of an image processed with NightAug are shown in Figure 4. Each augmentation has a set probability of being applied, with the strength of the augmentation being random. Random regions of the augmented image may then be replaced with that of the original image. The probability of this region replacement reduces with each iteration.
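A compact Python sketch of the single-augmentation flow in Algorithm 1. The choice of `adjust_gamma` as the example augmentation, the interpretation of `randInt(img.shape[i], 2)` as a pair of patch coordinates, and the strength range are all assumptions made for illustration.

```python
# One NightAug step (cf. Algorithm 1): apply a random augmentation, then restore
# random patches of the clean image so the effect is spatially uneven.
import random
import torch
import torchvision.transforms.functional as TF

def night_aug_step(img, augmentation=TF.adjust_gamma):
    img_clean = img.clone()                               # keep a clean copy for patch restoration
    if random.random() >= 0.5:
        strength = 0.8 * random.random() + 0.2             # random augmentation strength in (0.2, 1.0)
        img = augmentation(img, strength)
        prob = 0.4
        while random.random() >= prob:                      # with decreasing probability, keep going
            x0, x1 = sorted(random.sample(range(img.shape[1]), 2))
            y0, y1 = sorted(random.sample(range(img.shape[2]), 2))
            img[:, x0:x1, y0:y1] = img_clean[:, x0:x1, y0:y1]
            prob += 0.1
    return img
```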
160
+
161
+ Overall Loss Our total loss can be represented as:
162
+
163
+ $$
164
+ L_{\mathrm{total}} = L_{\mathrm{sup}} + \lambda L_{\mathrm{unsup}}, \tag{7}
165
+ $$
166
+
167
+ where $\lambda$ represents a weight factor for the unsupervised loss, and is set experimentally. $L_{\mathrm{sup}}, L_{\mathrm{unsup}}$ refer to Eq. (1) and Eq. (2), respectively.
168
+
169
+ <table><tr><td>Method</td><td>AP</td><td>Pedestrian</td><td>Rider</td><td>Car</td><td>Truck</td><td>Bus</td><td>Motorcycle</td><td>Bicycle</td><td>TrafficLight</td><td>TrafficSign</td></tr><tr><td>Lower-Bound</td><td>41.1</td><td>50.0</td><td>28.9</td><td>66.6</td><td>47.8</td><td>47.5</td><td>32.8</td><td>39.5</td><td>41.0</td><td>56.5</td></tr><tr><td>Upper-Bound</td><td>46.2</td><td>52.1</td><td>35.0</td><td>73.6</td><td>53.5</td><td>54.8</td><td>36.0</td><td>41.8</td><td>52.2</td><td>63.3</td></tr><tr><td>DA F-RCNN [3]</td><td>41.3</td><td>50.4</td><td>30.3</td><td>66.3</td><td>46.8</td><td>48.3</td><td>32.6</td><td>41.4</td><td>41.0</td><td>56.2</td></tr><tr><td>TDD [11]</td><td>34.6</td><td>43.1</td><td>20.7</td><td>68.4</td><td>33.3</td><td>35.6</td><td>16.5</td><td>25.9</td><td>43.1</td><td>59.5</td></tr><tr><td>UMT [7]</td><td>36.2</td><td>46.5</td><td>26.1</td><td>46.8</td><td>44.0</td><td>46.3</td><td>28.2</td><td>40.2</td><td>31.6</td><td>52.7</td></tr><tr><td>AT [15]</td><td>38.5</td><td>42.3</td><td>30.4</td><td>60.8</td><td>48.9</td><td>52.1</td><td>34.5</td><td>42.7</td><td>29.1</td><td>43.9</td></tr><tr><td>2PCNet (Ours)</td><td>46.4</td><td>54.4</td><td>30.8</td><td>73.1</td><td>53.8</td><td>55.2</td><td>37.5</td><td>44.5</td><td>49.4</td><td>65.2</td></tr></table>
170
+
171
+ Table 1. Results of day-to-night domain adaptation on the BDD100K dataset; the Average Precision (AP) of each class is reported. A Faster RCNN detector with a ResNet-50 feature extractor is used for all experiments to ensure a fair comparison. Faster RCNN is used as the lower-bound and upper-bound, trained on labelled daytime and nighttime data respectively. The lower-bound provides a baseline without any domain adaptation, while the upper-bound is fully supervised, i.e., the case where labelled target night data is available.
172
+
173
+ <table><tr><td>Method</td><td>APcoco</td><td>Car</td><td>Bus</td><td>Truck</td></tr><tr><td>Lower-Bound</td><td>22.1</td><td>37.5</td><td>29.8</td><td>30.7</td></tr><tr><td>Upper-Bound</td><td>23.9</td><td>42.0</td><td>33.8</td><td>35.0</td></tr><tr><td>FDA [34]</td><td>22.6</td><td>38.5</td><td>37.2</td><td>23.2</td></tr><tr><td>ForkGAN [38]</td><td>22.9</td><td>41.2</td><td>33.3</td><td>32.1</td></tr><tr><td>2PCNet (Ours)</td><td>23.5</td><td>40.7</td><td>38.2</td><td>35.0</td></tr></table>
174
+
175
+ Table 2. Comparison of our framework, 2PCNet, with image-to-image (I2I) translation methods, conducted on the BDD100K dataset. ForkGAN and FDA are used for comparison. The reported $AP_{coco}$ is the AP averaged over IoUs 0.5 to 0.95.
176
+
177
+ # 4. Experiments
178
+
179
+ # 4.1. Baselines
180
+
181
+ To evaluate our method, we compare our approach with SOTA methods in domain adaptation for object detection. These include DA-Faster RCNN [3], TDD [11], UMT [7], AT [15] as well as a non-DA baseline Faster-RCNN [21]. Faster-RCNN is used as both our lower and upper-bound, where it is trained on labelled source and target data respectively. We additionally compare our approach with image-to-image translation methods, ForkGAN [38] and FDA [34]. Translation methods are trained on Faster RCNN with both the daytime and translated images.
182
+
183
+ # 4.2. Datasets
184
+
185
+ The majority of existing nighttime datasets either focus on semantic segmentation and do not provide labels for object detection [5, 23, 24], or contain very few classes [19, 20]. BDD100K [36] was selected as it provides object detection labels covering a wide range of classes (10). It also has a large number of images compared to other DA datasets, covering daytime, nighttime and other adverse conditions.
186
+
187
+ The SHIFT [25] dataset is a recent simulated driving dataset that contains scenes in various environments. A continuous shift of these environments is available. SHIFT contains 6 class labels that share similarities to the BDD100K classes. For our evaluation, we use images with the 'day' and 'night' label as our source and target data respectively. We further ensure that the weather tag is 'clear' to isolate other weather conditions from the evaluation.
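A small sketch of how such a day/night split could be selected from per-image metadata; the record structure and the `timeofday`/`weather` keys are assumptions for illustration and may not match the datasets' actual attribute names.

```python
# Hypothetical day/night split selection from per-image metadata records.
def select_split(records, timeofday, weather="clear"):
    """Return image paths whose attributes match the requested time of day and weather."""
    return [r["path"] for r in records
            if r.get("timeofday") == timeofday and r.get("weather") == weather]

# Example usage with assumed record dicts:
# source = select_split(records, "day")     # labelled daytime source images
# target = select_split(records, "night")   # unlabelled nighttime target images
```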
188
+
189
+ # 4.3. Implementation
190
+
191
+ Following previous SOTA methods, we employ Faster-RCNN [21] as our base detection model and ResNet-50 [10] pretrained on ImageNet [6] as our feature extractor. All images are scaled by resizing their shorter side to 600 pixels. For student-scaling we set a schedule at (0.57, 0.64, 0.71, 0.78, 0.85, 0.92) of the maximum iterations with scales (0.5, 0.6, 0.7, 0.8, 0.9, 1.0). The loss weight is set to $\lambda = 0.3$ and the EMA smoothing coefficient to 0.9996. A confidence threshold of 0.8 is used for phase one of Two-Phase Consistency. For the initial pretraining of the student model, we train the student on the source images for 50k and 20k iterations for BDD100K and SHIFT, respectively. Supervised inputs are daytime images with and without NightAug. We then copy the weights to the teacher and continue training with the addition of the unsupervised loss for an additional 50k iterations. The learning rate is kept at 0.04 throughout training. Our network is trained on 3 RTX3090 GPUs with a batch-size of 6 source and 6 target images.
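A minimal sketch of the EMA teacher update and the total loss weighting (Eq. 7) implied by these settings; `student` and `teacher` are assumed to be two identically structured detector modules, and the helper names are illustrative rather than the authors' code.

```python
# Teacher EMA update and total loss weighting, using the coefficients from Sec. 4.3.
import torch

EMA_KEEP = 0.9996    # EMA smoothing coefficient
LAMBDA_UNSUP = 0.3   # weight of the unsupervised loss

@torch.no_grad()
def update_teacher(teacher, student, keep_rate=EMA_KEEP):
    """teacher <- keep_rate * teacher + (1 - keep_rate) * student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(keep_rate).add_(s_param, alpha=1.0 - keep_rate)

def total_loss(l_sup, l_unsup, lam=LAMBDA_UNSUP):
    return l_sup + lam * l_unsup    # Eq. (7)
```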
192
+
193
+ # 4.4. Comparison to SOTA
194
+
195
+ Comparison on BDD100K We compare our method against the SOTA on real driving scenes, evaluating domain adaptation performance on nighttime images; the results of this experiment can be seen in Table 1. The results show that our method achieves the highest performance
196
+
197
+ ![](images/260a68fcdf8a6dfda8ed4d951a9e734559f34114dc70c808af48e92e2eeabd0c.jpg)
198
+ Figure 5. Qualitative results of Faster RCNN, Adaptive Teacher (AT) and our method on the SHIFT dataset with the ground-truth on the far right. We can observe that Faster RCNN is not able to detect objects due to absence of domain adaptation, while AT has a large number of small false positive bounding boxes compared to our method which closely resembles that of the ground-truth.
199
+
200
+ ![](images/fa1c9a5062df328949db62faac6c53ac246b0a74b845931868e4ae6e10da8b1d.jpg)
201
+
202
+ ![](images/118d34ec49e84b21fdf24b602e2d317adbad87c2f07af0c8f63bcb2c7fe80499.jpg)
203
+
204
+ ![](images/85899240049ff1fc50d9929c44f154e9f808a51c616d22cd1432f4ba874464f8.jpg)
205
+
206
+ ![](images/52216da15ee92e9d03f9f345c2ca5f962ae1e5bd73a17eec5bd7ac074b3d2650.jpg)
207
+
208
+ <table><tr><td>Method</td><td>AP</td><td>Per.</td><td>Car</td><td>Truck</td><td>Bus</td><td>Mcy.</td><td>Bcy.</td></tr><tr><td>Lower-Bound</td><td>41.6</td><td>40.4</td><td>44.5</td><td>49.9</td><td>53.7</td><td>14.3</td><td>46.7</td></tr><tr><td>Upper-Bound</td><td>47.0</td><td>49.7</td><td>51.5</td><td>56.0</td><td>53.6</td><td>19.2</td><td>52.4</td></tr><tr><td>DA FR [3]</td><td>43.7</td><td>43.0</td><td>48.8</td><td>47.8</td><td>52.1</td><td>19.9</td><td>55.8</td></tr><tr><td>UMT [7]</td><td>31.1</td><td>7.7</td><td>47.5</td><td>18.4</td><td>46.8</td><td>16.6</td><td>49.2</td></tr><tr><td>AT [15]</td><td>38.9</td><td>25.8</td><td>33.0</td><td>54.7</td><td>49.5</td><td>20.7</td><td>52.3</td></tr><tr><td>2PCNet (Ours)</td><td>49.1</td><td>51.4</td><td>54.6</td><td>54.8</td><td>56.6</td><td>23.9</td><td>54.2</td></tr></table>
209
+
210
+ Table 3. Results of day-to-night domain adaptation on the SHIFT dataset; the Average Precision (AP) of each class is reported. Faster RCNN is used as the lower-bound and upper-bound, trained on labelled daytime and nighttime data respectively.
211
+
212
+ with an AP of 46.4, which is $20.5\%$ higher than the SOTA student-teacher methods and above the upper-bound. We have observed in experiments that student-teacher methods underperform, with an AP below the lower-bound, due to error propagation from noisy pseudo-labels. The result of this error is small false-positive detections, as seen in Figure 1. Our method does not suffer from the same issue, allowing for higher performance. We can also observe that our method performs well across all classes. Even when compared with the upper-bound, 2PCNet achieves higher AP on the majority of classes. This indicates that our method is able to generalise well across large and small classes.
213
+
214
+ The comparison with image-to-image translation methods is shown in Table 2. Translation methods do not suffer from the error propagation problem, as they are trained on Faster RCNN without a teacher. Even so, we can see that our method outperforms SOTA adverse-vision translation
215
+
216
+ methods.
217
+
218
+ Comparison on SHIFT To further compare our method with the SOTA, we evaluate on the SHIFT simulation dataset. Due to the nature of the simulated data, many nighttime image characteristics that we previously mentioned, such as blur, noise and glare, are not exhibited in this data.
219
+
220
+ The results of this experiment are shown in Table 3. We can observe that previous SOTA methods that use the student-teacher framework perform worse than the lower-bound. The sub-par performance is again due to the error-propagation problem. AT performs better than UMT due to AT's inclusion of adversarial learning. However, adversarial learning is not enough to mitigate this problem. We can see that DA FRCNN outperforms both SOTA student-teacher methods, as it is not affected by error propagation. It is, however, still largely below the upper-bound performance. 2PCNet outperforms these previous methods as well as the upper-bound. We achieve an improvement of $+10.2$ AP over previous SOTA student-teacher methods and $+2.1$ AP over the upper-bound.
221
+
222
+ # 4.5. Ablation Studies
223
+
224
+ To demonstrate the effectiveness of each of our components, we train several models for 100K iterations and evaluate them on the BDD100K dataset. We present our findings in Table 4.
225
+
226
+ Two-Phase Consistency We can observe in Table 4 that the addition of Two-Phase Consistency (C) demonstrated a wide performance gap when compared to the Mean-Teacher baseline, +13.5 AP (+43%). This improvement in AP exists
227
+
228
+ ![](images/808250427ec14cf82c8ac96e883ffd23fd2c6af9f58f5aeface3c8be797b87c7.jpg)
229
+ Figure 6. Training curve on BDD100K dataset ablation study. We show the overall AP training curve as well as the AP of large, medium and small objects. MT represents the base Mean Teacher framework. It can be seen that at all scales, the absence of Two-Phase Consistency (C) results in a sharp drop during training. We can also see that with the inclusion of NightAug (NA) and student-scaling (SS) the gradient of the curve increases. We note that the inclusion of a domain classifier (DC) reduces the performance at all scales.
230
+
231
+ ![](images/b6705329c2169a0416d90e84e815498b1f0bfb53c6481a403cfb48ec48081d07.jpg)
232
+
233
+ ![](images/35530a358b9d008ad8b061628ac5c0ae9f9d1b58dc0f9a7ef67087c3e74bd327.jpg)
234
+
235
+ ![](images/6e1f6579437a259fb07456dba92adee7672ddce723d1501354e0dc90cdc44b74.jpg)
236
+
237
+ across large, medium and small objects. While the performance of MT is initially strong, it rapidly begins to decline, as can be observed in Figure 6. This drop in performance is due to the error propagation of noisy pseudo-labels. The experimental results show that Two-Phase Consistency provides a solution: highly confident pseudo-labels are bounded by less confident pseudo-labels, enabling a balanced transfer of knowledge to the student.
238
+
239
+ NightAug We benchmarked the effectiveness of NightAug in our framework, as shown in Table 4. The inclusion of NightAug increases the detection performance of small objects by $5\%$ . Additionally, the gradient of the training performance remains steep, as seen in Figure 6. The positive gradient is most pronounced for APm and APs, where objects are more prone to nighttime-specific complications.
240
+
241
+ Student-Scaling Our final component, student-scaling, is included in the framework, and the results can be seen in Table 4. We can observe that student-scaling is able to boost the performance of small object detection by $6\%$ . This boost in performance is due to the student network focusing on smaller objects earlier in the training process. We note that the performance of large objects has dropped by $1 - 2\%$ ; however, referring to the training curves in Figure 6, APl remains steep. As the initial focus is on smaller objects, less time is allocated to larger objects during training. This can be mitigated by lengthening training, resulting in more iterations for larger objects.
242
+
243
+ Domain Classifier To conclude our study, we included a domain classifier in our network. Adversarial learning is a widely used DA technique; however, when added to 2PCNet, a performance drop across all scales can be seen, as shown in Table 4. The suppression of nighttime features is suspected to be the cause, since the adversarial loss guides the feature extractor to maintain domain invariance. By suppressing nighttime features,
244
+
245
+ <table><tr><td colspan="4">Methods</td><td colspan="4"></td></tr><tr><td>C</td><td>NA</td><td>SS</td><td>DC</td><td>AP</td><td>APl</td><td>APm</td><td>APs</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td></td><td>46.4</td><td>41.7</td><td>25.8</td><td>9.1</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>44.5</td><td>41.6</td><td>25.0</td><td>8.3</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td>45.8</td><td>42.2</td><td>25.7</td><td>8.6</td></tr><tr><td>✓</td><td></td><td></td><td></td><td>45.2</td><td>42.9</td><td>25.7</td><td>8.2</td></tr><tr><td></td><td></td><td></td><td></td><td>31.7</td><td>30.4</td><td>16.5</td><td>4.8</td></tr></table>
246
+
247
+ Table 4. Ablation studies on the BDD100K dataset. The last row represents the base Mean-Teacher network. Methods are referred to as C: Two-Phase Consistency, NA: NightAug, SS: Student-Scaling, DC: Domain Classifier. APl, APm, and APs represent the AP of large, medium and small objects, respectively.
248
+
249
+ the teacher has less information to distil to the student. This is demonstrated in Figure 6, where the domain classifier (dotted purple) initially performs well, but as training continues, our method (solid red) is able to surpass its performance.
250
+
251
+ # 5. Conclusion
252
+
253
+ Our proposed framework, 2PCNet, presents a novel solution to the challenges of day-to-night domain adaptive object detection. With our Two-Phase Consistency approach, we are able to effectively leverage high- and low-confidence knowledge for the student, while mitigating the error propagation commonly present in previous student-teacher methods. We further address issues arising from small-scale and dark objects through the use of student-scaling and NightAug, respectively. Experimental results on the BDD100K [36] and SHIFT [25] datasets demonstrate that 2PCNet outperforms existing state-of-the-art methods. Overall, our proposed framework provides an effective and efficient solution for day-to-night domain adaptive object detection.
254
+
255
+ Acknowledgements This work is partially supported by MOE2019-T2-1-130.
256
+
257
+ # References
258
+
259
+ [1] Qi Cai, Yingwei Pan, Chong-Wah Ngo, Xinmei Tian, Lingyu Duan, and Ting Yao. Exploring object relation in mean teacher for cross-domain detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11449-11458, 2019. 1
260
+ [2] Lin Chen, Huaian Chen, Zhixiang Wei, Xin Jin, Xiao Tan, Yi Jin, and Enhong Chen. Reusing the task-specific classifier as a discriminator: Discriminator-free adversarial domain adaptation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7171-7180, 2022. 2
261
+ [3] Yuhua Chen, Wen Li, Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Domain adaptive faster r-cnn for object detection in the wild. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3339-3348, 2018. 1, 2, 6, 7
262
+ [4] Yuhua Chen, Haoran Wang, Wen Li, Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Scale-aware domain adaptive faster r-cnn. International Journal of Computer Vision, page 2223-2243, 2021. 2
263
+ [5] Dengxin Dai and Luc Van Gool. Dark model adaptation: Semantic image segmentation from daytime to nighttime. In International Conference on Intelligent Transportation Systems (ITSC), pages 3819-3824, 2018. 2, 6
264
+ [6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248-255, 2009. 6
265
+ [7] Jinhong Deng, Wen Li, Yuhua Chen, and Lixin Duan. Unbiased mean teacher for cross-domain object detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4089-4099, 2021. 1, 2, 4, 5, 6, 7
266
+ [8] Xueqing Deng, Peng Wang, Xiaochen Lian, and Shawn Newsam. NightLab: A Dual-Level Architecture With Hardness Detection for Segmentation at Night. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16938-16948, 2022. 2
267
+ [9] Huan Gao, Jichang Guo, Guoli Wang, and Qian Zhang. Cross-Domain Correlation Distillation for Unsupervised Domain Adaptation in Nighttime Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9913-9923, 2022. 2
268
+ [10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016. 6
269
+ [11] Mengzhe He, Yali Wang, Jiaxi Wu, Yiru Wang, Hanqing Li, Bo Li, Weihao Gan, Wei Wu, and Yu Qiao. Cross domain object detection by target-perceived dual branch distillation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9560-9570, 2022. 1, 2, 5, 6
270
+ [12] Guoliang Kang, Lu Jiang, Yunchao Wei, Yi Yang, and Alexander G Hauptmann. Contrastive adaptation network for single- and multi-source domain adaptation. IEEE Transactions on Pattern Analysis & Machine Intelligence, pages 1793–1804, 2022. 2
271
+
272
+ [13] Guoliang Kang, Lu Jiang, Yi Yang, and Alexander G Hauptmann. Contrastive adaptation network for unsupervised domain adaptation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4888-4897, 2019. 2
273
+ [14] Attila Lengyel, Sourav Garg, Michael Milford, and Jan C. van Gemert. Zero-shot day-night domain adaptation with a physics prior. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 4379-4389, 2021. 2
274
+ [15] Yu-Jhe Li, Xiaoliang Dai, Chih-Yao Ma, Yen-Cheng Liu, Kan Chen, Bichen Wu, Zijian He, Kris Kitani, and Peter Vajda. Cross-domain adaptive teacher for object detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7571-7580, 2022. 1, 2, 4, 5, 6, 7
275
+ [16] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. Learning transferable features with deep adaptation networks. In International Conference on International Conference on Machine Learning, page 97-105, 2015. 2
276
+ [17] Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I. Jordan. Unsupervised domain adaptation with residual transfer networks. In International Conference on Neural Information Processing Systems, page 136-144, 2016. 2
277
+ [18] Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I. Jordan. Deep transfer learning with joint adaptation networks. In International Conference on Machine Learning, page 2208-2217, 2017. 2
278
+ [19] Igor Morawski, Yu-An Chen, Yu-Sheng Lin, and Winston H. Hsu. Nod: Taking a closer look at detection under extreme low-light conditions with night object detection dataset. In British Machine Vision Conference, (BMVC), 2021. 6
279
+ [20] Lukás Neumann, Michelle Karg, Shanshan Zhang, Christian Scharfenberger, Eric Piegert, Sarah Mistr, Olga Prokofyeva, Robert Thiel, Andrea Vedaldi, Andrew Zisserman, and Bernt Schiele. Nightowls: A pedestrians at night dataset. In Asian Conference on Computer Vision (ACCV), pages 691-705, 2018. 6
280
+ [21] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In International Conference on Neural Information Processing Systems, page 91-99, 2015. 6
281
+ [22] Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, and Kate Saenko. Strong-weak distribution alignment for adaptive object detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6949–6958, 2019. 1
282
+ [23] Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Guided curriculum model adaptation and uncertainty-aware evaluation for semantic nighttime image segmentation. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 7373-7382, 2019. 2, 6
283
+ [24] Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Acdc: The adverse conditions dataset with correspondences for semantic driving scene understanding. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 10745-10755, 2021. 6
284
+ [25] Tao Sun, Mattia Segu, Janis Postels, Yuxuan Wang, Luc Van Gool, Bernt Schiele, Federico Tombari, and Fisher Yu. Shift:
285
+
286
+ A synthetic driving dataset for continuous multi-task domain adaptation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 21339-21350, 2022. 6, 8
287
+ [26] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In International Conference on Neural Information Processing Systems, page 1195–1204, 2017. 2, 4
288
+ [27] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2962-2971, 2017. 2
289
+ [28] Sinan Wang, Xinyang Chen, Yunbo Wang, Mingsheng Long, and Jianmin Wang. Progressive adversarial networks for fine-grained domain adaptation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9210-9219, 2020. 2
290
+ [29] Xinyi Wu, Zhenyao Wu, Hao Guo, Lili Ju, and Song Wang. DANNet: A One-Stage Domain Adaptation Network for Unsupervised Nighttime Semantic Segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15769–15778, 2021. 2
291
+ [30] Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V. Le. Self-training with noisy student improves imagenet classification. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10684-10695, 2020. 5
292
+ [31] Chang-Dong Xu, Xingjie Zhao, Xin Jin, and Xiu-Shen Wei. Exploring categorical regularization for domain adaptive object detection. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11721-11730, 2020. 2
293
+ [32] Minghao Xu, Hang Wang, Bingbing Ni, Qi Tian, and Wenjun Zhang. Cross-domain detection via graph-induced prototype alignment. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12352-12361, 2020. 1
294
+ [33] Qi Xu, Yinan Ma, Jing Wu, Chengnian Long, and Xiaolin Huang. CDAda: A Curriculum Domain Adaptation for Nighttime Semantic Segmentation. In IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), pages 2962-2971, 2021. 2
295
+ [34] Yanchao Yang and Stefano Soatto. FDA: Fourier domain adaptation for semantic segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4084-4094, 2020. 6
296
+ [35] Junjie Ye, Changhong Fu, Guangze Zheng, Danda Pani Paudel, and Guang Chen. Unsupervised Domain Adaptation for Nighttime Aerial Tracking. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8896-8905, 2022. 2
297
+ [36] Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2633-2642, 2020. 1, 6, 8
298
+
299
+ [37] Weichen Zhang, Wanli Ouyang, Wen Li, and Dong Xu. Collaborative and adversarial network for unsupervised domain adaptation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3801-3809, 2018. 2
300
+ [38] Ziqiang Zheng, Yang Wu, Xinran Nicole Han, and Jianbo Shi. Forkgan: Seeing into the rainy night. In European Conference on Computer Vision (ECCV), 2020. 6
301
+ [39] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 2242-2251, 2017. 2
2pcnettwophaseconsistencytrainingfordaytonightunsuperviseddomainadaptiveobjectdetection/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e71383dafe9f19e94eedf0d700590a90ddf253988cf003b5fb7b0d09699a6dce
3
+ size 525176
2pcnettwophaseconsistencytrainingfordaytonightunsuperviseddomainadaptiveobjectdetection/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:441a8a30c21f0bb0f60d2b96b52dfea78dd9e0cd361f9908664d93fc8d07a82b
3
+ size 341498
3davatarganbridgingdomainsforpersonalizededitableavatars/ddf7c6ad-f988-4a54-8cf6-7aff7d8dd81c_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:173376d1720f5a4f200655152947b61145d35b4302f182c72ea08b36d8f20822
3
+ size 73298
3davatarganbridgingdomainsforpersonalizededitableavatars/ddf7c6ad-f988-4a54-8cf6-7aff7d8dd81c_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a259c4260542a509f480dbf7ae94394c1b9bceead07e13986b694b231c6d8027
3
+ size 94935
3davatarganbridgingdomainsforpersonalizededitableavatars/ddf7c6ad-f988-4a54-8cf6-7aff7d8dd81c_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7b1424fd1b0ce9d96032e6edc3775f412cb2a31eb68e93268d8f39468e0c4f65
3
+ size 7265466
3davatarganbridgingdomainsforpersonalizededitableavatars/full.md ADDED
@@ -0,0 +1,279 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # 3DAvatarGAN: Bridging Domains for Personalized Editable Avatars
2
+
3
+ Rameen Abdal $^{\dagger 1}$ Hsin-Ying Lee $^{2}$ Peihao Zhu $^{\dagger 1}$ Minglei Chai $^{2}$ Aliaksandr Siarohin $^{2}$
4
+ Peter Wonka $^{1}$ Sergey Tulyakov $^{2}$ $^{1}$ KAUST $^{2}$ Snap Inc.
5
+
6
+ ![](images/df16d596e57fb3925b8f10971a3cd7e1f29201be7f32e2232fd0034c6fca15b4.jpg)
7
+ Figure 1. Editable 3D avatars. We present 3DAvatarGAN, a 3D GAN able to produce and edit personalized 3D avatars from a single photograph (real or generated). Our method distills information from a 2D-GAN trained on 2D artistic datasets like Caricatures, Pixar toons, Cartoons, Comics etc. and requires no camera annotations.
8
+
9
+ # Abstract
10
+
11
+ Modern 3D-GANs synthesize geometry and texture by training on large-scale datasets with a consistent structure. Training such models on stylized, artistic data, with often unknown and highly variable geometry and camera information, has not yet been shown possible. Can we train a 3D GAN on such artistic data, while maintaining multi-view consistency and texture quality? To this end, we propose an adaptation framework, where the source domain is a pre-trained 3D-GAN, while the target domain is a 2D-GAN trained on artistic datasets. We then distill the knowledge from a 2D generator to the source 3D generator. To do that, we first propose an optimization-based method to align the distributions of camera parameters across domains. Second, we propose regularizations necessary to learn high-quality texture, while avoiding degenerate geometric solutions, such as flat shapes. Third, we show
12
+
13
+ a deformation-based technique for modeling exaggerated geometry of artistic domains, enabling—as a byproduct—personalized geometric editing. Finally, we propose a novel inversion method for 3D-GANs linking the latent spaces of the source and the target domains. Our contributions—for the first time—allow for the generation, editing, and animation of personalized artistic 3D avatars on artistic datasets. Project Page: https://rameenabdal.github.io/3DAvatarGAN
14
+
15
+ # 1. Introduction
16
+
17
+ Photo-realistic portrait face generation is an iconic application demonstrating the capability of generative models, especially GANs [28,30,31]. A recent development has been the advancement from straightforwardly synthesizing 2D images to learning 3D structures without 3D supervision, referred to as 3D-GANs [10,41,55,64]. Such training
18
+
19
+ is feasible with datasets containing objects with highly consistent geometry, enabling a 3D-GAN to learn a distribution of shapes and textures. In contrast, artistically stylized datasets [25, 65] have arbitrary exaggerations of both geometry and texture; for example, the nose, cheeks, and eyes can be arbitrarily drawn, depending on the style of the artist as well as on the features of the subject, see Fig. 1. Training a 3D-GAN on such data becomes problematic due to the challenge of learning such an arbitrary distribution of geometry and texture. In our experiments (Sec. 5.1), 3D-GANs [10] generate flat geometry and essentially become 2D-GANs. A natural question arises: can a 3D-GAN synthesize consistent novel views of images belonging to artistically stylized domains, such as the ones in Fig. 1?
20
+
21
+ In this work, we propose a domain-adaptation framework that allows us to answer the question positively. Specifically, we fine-tune a pre-trained 3D-GAN using a 2D-GAN trained on a target domain. Despite being well explored for 2D-GANs [25, 65], existing domain adaptation techniques are not directly applicable to 3D-GANs, due to the nature of 3D data and the characteristics of 3D generators.
22
+
23
+ The geometry and texture of stylized 2D datasets can be arbitrarily exaggerated depending on the context, artist, and production requirements. Due to this, no reliable way to estimate camera parameters for each image exists, whether using an off-the-shelf pose detector [72] or a manual labeling effort. To enable the training of 3D-GANs on such challenging datasets, we propose three contributions. ① An optimization-based method to align distributions of camera parameters between domains. ② Texture, depth, and geometry regularizations to avoid degenerate, flat solutions and ensure high visual quality. Furthermore, we redesign the discriminator training to make it compatible with our task. We then propose ③ a Thin Plate Spline (TPS) 3D deformation module operating on a tri-plane representation to allow for certain large and sometimes extreme geometric deformations, which are so typical in artistic domains.
24
+
25
+ The proposed adaptation framework enables the training of 3D-GANs on complex and challenging artistic data. The previous success of domain adaptation in 2D-GANs unleashed a number of exciting applications in the content creation area [25, 65]. Given a single image, such methods first find a latent code corresponding to it using GAN inversion, followed by latent editing producing the desired effect in the image space. Compared to 2D-GANs, the latent space of 3D-GANs is more entangled, making it more challenging to link the latent spaces between domains and rendering the existing inversion and editing techniques not directly applicable. Hence, we take a step further and explore the use of our approach for 3D artistic avatar generation and editing. Our final contribution to enable such applications is ④ a new inversion method for coupled 3D-GANs.
26
+
27
+ In summary, the proposed domain-adaptation framework
28
+
29
+ allows us to train 3D-GANs on challenging artistic datasets with exaggerated geometry and texture. We call our method 3DAvatarGAN as it, for the first time, offers generation, editing, and animation of personalized stylized, artistic avatars obtained from a single image. Our results (see Sec. 5.2) show the high-quality 3D avatars made possible by our method compared to naive fine-tuning.
30
+
31
+ # 2. Related Work
32
+
33
+ GANs and Semantic Image Editing. Generative Adversarial Networks (GANs) [19, 47] are one popular type of generative model, especially for smaller high-quality datasets such as FFHQ [32], AFHQ [14], and LSUN objects [67]. For these datasets, StyleGAN [28, 30, 32] can be considered the current state-of-the-art GAN [27, 28, 30, 32, 33]. The disentangled latent space learned by StyleGAN has been shown to exhibit semantic properties conducive to semantic image editing [1, 3, 16, 22, 36, 44, 51, 56, 62]. CLIP [46] based image editing [2, 17, 44] and domain transfer [15, 70] are another set of works enabled by StyleGAN.
34
+
35
+ GAN Inversion. Algorithms to project existing images into a GAN latent space are a prerequisite for GAN-based image editing. There are mainly two types of methods to enable such a projection: optimization-based methods [1,13,57,71] and encoder-based methods [5,7,48,58,69]. On top of both streams of methods, the generator weights can be further modified after obtaining initial inversion results [49].
36
+
37
+ Learning 3D-GANs with 2D Data. Previously, some approaches attempt to extract 3D structure from pre-trained 2D-GANs [42, 52]. Recently, inspired by Neural Radiance Field (NeRF) [9, 37, 43, 68], novel GAN architectures have been proposed to combine implicit or explicit 3D representations with neural rendering techniques [11, 12, 20, 39-41, 50, 53, 55, 63, 64]. In our work, we build on EG3D [11] which has current state-of-the-art results for human faces trained on the FFHQ dataset.
38
+
39
+ Avatars and GANs. To generate new results in an artistic domain (e.g., anime or cartoons), a promising technique is to fine-tune an existing GAN pre-trained on photographs, e.g., [45, 54, 60]. Data augmentation and freezing lower layers of the discriminator are useful tools when fine-tuning a 2D-GAN [28, 38]. One branch of methods [18, 44, 70] investigates domain adaptation when only a few examples or only text descriptions are available, while others focus on matching the distribution of artistic datasets with diverse shapes and styles. Our work falls in the latter category. Among previous efforts, StyleCariGAN [25] proposes invertible modules in the generator to train and generate caricatures from real images. DualStyleGAN [65] learns two mapping networks in StyleGAN to control the style and structure of the new domain. Some works are trained on 3D data or require heavy labeling/engineering [21, 26, 66] and use 3D morphable models to map 2D images of caricatures to 3D models.
40
+
41
+ ![](images/516e7d18529441246367b2a8888fc2a1c37173cbf2bdf5c251b7f98194755e06.jpg)
42
+ Naive Fine-Tuning
43
+ Figure 2. Comparison with naive fine-tuning. Comparison of generated 3D avatars with a naively fine-tuned generator $\mathrm{G}_{\mathrm{base}}$ (left sub-figures) versus our generator $\mathrm{G}_{\mathrm{t}}$ (right sub-figures). The corresponding sub-figures show comparisons in terms of texture quality (top two rows) and geometry (bottom two rows). See Sec. 5.1 for details.
44
+
45
+ ![](images/f9574c015b5d4a9f970cef245e38b45f1237a686dcc4b322c4f7ab095c2acd23.jpg)
46
+ Our Method
47
+
48
+ However, such models fail to model the hair, teeth, neck, and clothes and suffer in texture quality. In this work, we are the first to tackle the problem of domain adaptation of 3D-GANs and to produce fully controllable 3D avatars. We employ 2D-to-3D domain adaptation and distillation and make use of synthetic 2D data from StyleCariGAN [25] and DualStyleGAN [65].
49
+
50
+ # 3. Domain Adaptation for 3D-GANs
51
+
52
+ The goal of domain adaptation for 3D-GANs is to adapt (both texture and geometry) to a particular style defined by a 2D dataset (Caricature, Anime, Pixar toons, Comic, and Cartoons [24, 25, 65] in our case). In contrast to 2D-StyleGAN-based fine-tuning methods, which are conceptually simpler [29, 45], fine-tuning a 3D-GAN on 2D data introduces challenges in addition to domain differences, especially in maintaining texture quality while preserving geometry. Moreover, for these datasets, there is no explicit shape and camera information. We define the domain adaptation task as follows: Given a prior 3D-GAN, i.e., EG3D $(\mathrm{G_s})$ of the source domain $(T_{\mathrm{s}})$ , we aim to produce a 3D avatar GAN $(\mathrm{G_t})$ of the target domain $(T_{\mathrm{t}})$ while maintaining the semantic, style, and geometric properties of $\mathrm{G_s}$ , and at the same time preserving the identity of the subject between the domains $(T_{\mathrm{s}} \leftrightarrow T_{\mathrm{t}})$ . Refer to Fig. 4 in the supplementary for the pipeline figure. We denote by $\mathrm{G}_{2\mathrm{D}}$ a teacher 2D-GAN, fine-tuned on the above datasets, that is used for knowledge distillation. Note that as $T_{\mathrm{t}}$ is not assumed to contain camera parameter annotations, the training scheme must suppress artifacts such as low-quality texture under different views and flat geometry (see Fig. 2). In the following, we discuss the details of our method.
53
+
54
+ # 3.1. How to align the cameras?
55
+
56
+ Selecting appropriate ranges for camera parameters is of paramount importance for high-fidelity geometry and texture detail. Typically, such parameters are empirically estimated, directly computed from the dataset using an off-the-shelf pose detector [10], or learned during training [8]. In the domains we aim to bridge, such as caricatures, for which a 3D model may not even exist, directly estimating the camera distribution is problematic and, hence, is not assumed by our method. Instead, we find it essential to ensure that the camera parameter distribution is consistent across the source and target domains. For the target domain, we use StyleGAN2 trained on FFHQ and fine-tuned on artistic datasets [25, 65]. Assuming that the intrinsic parameters of all the cameras are the same, we aim to match the distribution of extrinsic camera parameters of $G_{\mathrm{s}}$ and $G_{2\mathrm{D}}$ and train our final $G_{\mathrm{t}}$ using it (see the illustration in Fig. 2 of the supplementary materials). To this end, we define an optimization-based method to match the sought distributions. The first step is to identify a canonical pose image in $G_{2\mathrm{D}}$ , where the yaw, pitch, and roll parameters are zero. According to Karras et al. [31], the image corresponding to the mean latent code satisfies this property. Let $\theta$ , $\phi$ be the camera Euler angles in a spherical coordinate system, $r$ , $c$ be the radius of the sphere and the camera look-at point, and $M$ be a function that converts these parameters into the camera-to-world matrix. Let $I_{\mathrm{s}}(w,\theta ,\phi ,c,r) = G_{\mathrm{s}}(w,M(\theta ,\phi ,c,r))$ and $I_{2\mathrm{D}}(w) = G_{2\mathrm{D}}(w)$ represent an arbitrary image generated by $G_{\mathrm{s}}$ and $G_{2\mathrm{D}}$ , respectively, given the $w$ code variable. Let $k_{\mathrm{d}}$ be the face key-points detected by the detector $K_{\mathrm{d}}$ [72]; then
57
+
58
+ $$
59
+ \left(c^{\prime}, r^{\prime}\right) := \underset{(c, r)}{\arg\min}\ \mathrm{L}_{\mathrm{kd}}\left(I_{\mathrm{s}}\left(w_{\mathrm{avg}}^{\prime}, 0, 0, c, r\right), I_{2\mathrm{D}}\left(w_{\mathrm{avg}}\right)\right), \tag{1}
60
+ $$
61
+
62
+ where $\mathrm{L_{kd}}(I_1,I_2) = \| k_{\mathrm{d}}(I_1) - k_{\mathrm{d}}(I_2)\| _1$ and $w_{\mathrm{avg}}$ and $w_{\mathrm{avg}}^{\prime}$ are the mean $w$ latent codes of $\mathrm{G}_{2\mathrm{D}}$ and $\mathrm{G_s}$ , respectively. In our results, $r^\prime$ is determined to be 2.7 and $c^{\prime}$ is approximately [0.0, 0.05, 0.17]. The next step is to determine a safe range for the $\theta$ and $\phi$ parameters. Following prior works, StyleFlow [3] and FreeStyleGAN [35] (see Fig. 5 of that paper), we set these parameters to $\theta^{\prime}\in [-0.45,0.45]$ and $\phi^{\prime}\in [-0.35,0.35]$ in radians.
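+
+ As a concrete illustration, below is a minimal sketch (not the authors' released code) of the optimization in Eq. 1. The helpers `render_source`, `render_2d`, and `keypoints` are hypothetical stand-ins for rendering $G_{\mathrm{s}}$ under $M(\theta, \phi, c, r)$, sampling $G_{2\mathrm{D}}$, and the key-point detector $k_{\mathrm{d}}$; the detector would need to be differentiable for this gradient-based variant.
+
+ ```python
+ import torch
+
+ def align_cameras(render_source, render_2d, keypoints, w_avg_s, w_avg_2d, steps=300):
+     """Match the canonical cameras of G_s and G_2D by optimizing (c, r) as in Eq. 1."""
+     c = torch.zeros(3, requires_grad=True)        # camera look-at point
+     r = torch.tensor(2.0, requires_grad=True)     # sphere radius (the paper reports r' = 2.7)
+     opt = torch.optim.Adam([c, r], lr=1e-2)
+     target = keypoints(render_2d(w_avg_2d)).detach()  # key-points of the canonical G_2D image
+     for _ in range(steps):
+         img_s = render_source(w_avg_s, theta=0.0, phi=0.0, c=c, r=r)
+         loss = (keypoints(img_s) - target).abs().mean()   # L1 key-point loss L_kd
+         opt.zero_grad()
+         loss.backward()
+         opt.step()
+     return c.detach(), r.detach()
+ ```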
63
+
64
+ # 3.2. What loss functions and regularizers to use?
65
+
66
+ Next, although the camera systems are aligned, the given dataset may not stem from a consistent 3D model, e.g., in the case of caricatures or cartoons. This entices the generator $G_{t}$ to converge to an easier degenerate solution with a flat geometry. Hence, to benefit from the geometric prior of $G_{s}$ , another important step is to design the loss functions and regularizers for a selected set of parameters to update in $G_{t}$ . In the following, we discuss these design choices:
67
+
68
+ ![](images/be9123cd168c87d431112c83004391d76550d1a56bad9d921aced8ac8a8ee51f.jpg)
69
+ Figure 3. Domain adaptation. Domain adaptation results of images from source domain $T_{\mathrm{s}}$ (top row in each sub-figure) to target domain $T_{\mathrm{t}}$ . Rows two to five show corresponding 3D avatar results from different viewpoints.
70
+
71
+ ![](images/d715cafb2b6af2c7094e88f5dff4bbf7e8fb8e731426e3ca0f77d84460d8c3a4.jpg)
72
+
73
+ ![](images/6fe6fc94d25497f665769f6248b26f4df3aedbb4db51ea523686a25d8e882e8e.jpg)
74
+
75
+ Loss Functions. To ensure texture quality and diversity, we resort to the adversarial loss used to fine-tune GANs as our main loss function. We use the standard non-saturating loss to train the generator and discriminator networks used in EG3D [11]. We also perform lazy density regularization to ensure consistency of the density values in the final finetuned model $\mathrm{G}_{\mathrm{t}}$ .
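+
+ For reference, here is a minimal sketch of these standard objectives, assuming logits from the dual discriminator D and densities $\sigma$ queried at original and slightly perturbed coordinates as in EG3D; this is illustrative, not the exact training code:
+
+ ```python
+ import torch.nn.functional as F
+
+ def g_nonsat_loss(fake_logits):
+     # Non-saturating generator loss: -log sigmoid(D(G(z))).
+     return F.softplus(-fake_logits).mean()
+
+ def d_nonsat_loss(real_logits, fake_logits):
+     # Discriminator loss: softplus(-D(real)) + softplus(D(fake)).
+     return F.softplus(-real_logits).mean() + F.softplus(fake_logits).mean()
+
+ def density_reg(sigma, sigma_perturbed):
+     # Lazy density regularization: densities at nearby points should agree.
+     return (sigma - sigma_perturbed).abs().mean()
+ ```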
76
+
77
+ Texture Regularization. Since the texture can be entangled with the geometry information, determining which layers to update is important. To make use of the fine-style information encoded in later layers, it is essential to update the $tRGB$ layer parameters (outputting tri-plane features) before the neural rendering stage. $tRGB$ are convolutional layers that transform feature maps to 3 channels at each resolution (96 channels in triplanes). Moreover, since the network has to adapt to a color distribution of $T_{t}$ , it is essential to update the decoder (MLP layers) of the neural rendering pipeline as well. Given the EG3D architecture, we also update the super-resolution layer parameters to ensure the coherency between the low-resolution and high-resolution outputs seen by the discriminator D.
78
+
79
+ Geometry Regularization. In order to allow the network to learn the structure distribution of $T_{\mathrm{t}}$ and at the same time ensure properties of $\mathcal{W}$ and $S$ latent spaces are preserved, we update the earlier layers with regularization. This also encourages the latent spaces of $T_{\mathrm{s}}$ and $T_{\mathrm{t}}$ to be easily linked. Essentially, we update the deviation parameter $\Delta s$ from the
80
+
81
+ $s$ activations of the $S$ space [62]. The $s$ activations are predicted by $\mathrm{A}(w)$ , where $\mathrm{A}$ is the learned affine function in EG3D. The $s$ activations scale the kernels of a particular layer. In order to preserve the identity as well as geometry such that the optimization of $\Delta s$ does not deviate too far away from the original domain $T_{\mathrm{s}}$ , we introduce a regularizer given by
82
+
83
+ $$
84
+ R (\Delta s) := \| \Delta s \| _ {1}. \tag {2}
85
+ $$
86
+
87
+ Note that we apply the $\mathrm{R}(\Delta s)$ regularization in a lazy manner, i.e., at the same interval as the lazy density regularization. Interestingly, after training, we can interpolate between the $s$ and $s + \Delta s$ parameters to interpolate between the geometries of samples in $T_{\mathrm{s}}$ and $T_{\mathrm{t}}$ (see Fig. 5).
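+
+ A minimal sketch of this regularizer and of the post-training interpolation (layer bookkeeping omitted; `delta_s` denotes the learned per-layer deviation added to the affine outputs $\mathrm{A}(w)$):
+
+ ```python
+ def delta_s_l1(delta_s_per_layer):
+     # R(Delta s) = ||Delta s||_1 (Eq. 2), summed over the regularized layers; applied lazily.
+     return sum(ds.abs().sum() for ds in delta_s_per_layer)
+
+ def interpolate_geometry(s, delta_s, alpha):
+     # alpha = 0 keeps the source-domain activations s; alpha = 1 uses the adapted s + Delta s.
+     return s + alpha * delta_s
+ ```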
88
+
89
+ Depth Regularization. Next, we observe that even though the above design choice produces better geometry for $T_{\mathrm{t}}$ , some samples from $G_{\mathrm{t}}$ can still lead to flatter geometry, and it is hard to detect these cases. We found that the problem is related to the relative depth of the background with respect to the foreground. To circumvent this problem, we use an additional regularization, where we encourage the average background depth of $G_{\mathrm{t}}$ to be similar to that of $G_{\mathrm{s}}$ . Let $S_{\mathrm{b}}$ be a face background segmentation network [34]. We first compute the average background depth of the samples given by $G_{\mathrm{s}}$ . This average depth is given by
90
+
91
+ $$
92
+ a _ {\mathrm {d}} := \frac {1}{M} \sum_ {n = 1} ^ {M} \left(\frac {1}{N _ {n}} \| D _ {n} \odot \mathrm {S} _ {\mathrm {b}} (I _ {n}) \| _ {F} ^ {2}\right). \tag {3}
93
+ $$
94
+
95
+ ![](images/8dff4311d2c0df9713a53db9cc71a323be40d2dafbf0424e7022191e653e146c.jpg)
96
+ Figure 4. 3D avatars from real images. Projection of real images on the 3D avatar generators.
97
+
98
+ Here, $D_{n}$ is the depth map of the image $I_{n}$ sampled from $G_{\mathrm{s}}$ , $\odot$ represents the Hadamard product, $M$ is the number of the sampled images, and $N_{n}$ is the number of background pixels in $I_{n}$ . Finally, regularization is defined as:
99
+
100
+ $$
101
+ \mathrm {R} (D) := \left\| a _ {\mathrm {d}} \cdot J - \left(D _ {\mathrm {t}} \odot \mathrm {S} _ {\mathrm {b}} \left(I _ {\mathrm {t}}\right)\right) \right\| _ {F}, \tag {4}
102
+ $$
103
+
104
+ where $D_{\mathrm{t}}$ is the depth map of the image $I_{\mathrm{t}}$ sampled from $\mathbf{G}_{\mathrm{t}}$ and $J$ is the matrix of ones having the same spatial dimensions as $D_{\mathrm{t}}$ .
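+
+ A minimal sketch of Eqs. 3 and 4; binary background masks from $S_{\mathrm{b}}$ are assumed, and, following Eq. 4 literally, the constant $a_{\mathrm{d}}$ is compared against the masked depth map:
+
+ ```python
+ import torch
+
+ def avg_background_depth(depths, bg_masks):
+     # Eq. 3: a_d = mean over M samples of (1/N_n) * ||D_n * S_b(I_n)||_F^2.
+     # depths, bg_masks: tensors of shape (M, H, W).
+     per_image = ((depths * bg_masks) ** 2).flatten(1).sum(1) / bg_masks.flatten(1).sum(1)
+     return per_image.mean()
+
+ def depth_reg(a_d, depth_t, bg_mask_t):
+     # Eq. 4: R(D) = ||a_d * J - D_t * S_b(I_t)||_F for a sample from G_t.
+     diff = a_d - depth_t * bg_mask_t
+     return (diff ** 2).sum().sqrt()
+ ```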
105
+
106
+ # 3.3. What discriminator to use?
107
+
108
+ Given that the data in $T_{\mathrm{s}}$ and $T_{\mathrm{t}}$ is not paired and $T_{\mathrm{t}}$ is not assumed to contain camera parameter annotations, the choice of the discriminator (D) used for this task is also a critical design choice. Essentially, we use the unconditional version of the dual discriminator proposed in EG3D, and hence, we do not condition the discriminator on the camera information. As a result, during training, $G_{\mathrm{t}}$ generates images with arbitrary poses using $\mathrm{M}(\theta', \phi', c', r')$, and the discriminator compares these images against arbitrary images from $T_{\mathrm{t}}$. We train the discriminator from scratch, and in order to adapt $T_{\mathrm{s}} \rightarrow T_{\mathrm{t}}$, we use the StyleGAN-ADA [28] training scheme with R1 regularization.
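+
+ For completeness, a short sketch of the R1 penalty referred to here (a standard gradient penalty on real samples, usually applied lazily; not specific to this paper):
+
+ ```python
+ import torch
+
+ def r1_penalty(discriminator, real_images, gamma=1.0):
+     # Penalize the squared gradient norm of D at real samples.
+     real_images = real_images.detach().requires_grad_(True)
+     logits = discriminator(real_images)
+     grad, = torch.autograd.grad(outputs=logits.sum(), inputs=real_images, create_graph=True)
+     return 0.5 * gamma * grad.flatten(1).pow(2).sum(1).mean()
+ ```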
109
+
110
+ # 3.4. How to incorporate larger geometric deformations between domains?
111
+
112
+ While the regularizers are used to limit the geometric changes when adapting from $T_{\mathrm{s}}$ to $T_{\mathrm{t}}$ , modeling large geometric deformations, e.g., in the caricature dataset, is another challenge.
113
+
114
+ One choice to edit the geometry is to use the properties of the tri-plane features learned by EG3D. We start out by analyzing these three planes in $\mathrm{G_s}$ . We observe that the frontal plane encodes most of the information required to render the final image. To quantify this, we sample images and depth maps from $\mathrm{G_s}$ and swap the front and the other planes between two random images. Then we compare the difference in RGB values of the images and the Chamfer distance of the depth maps. When swapping the frontal planes, the final images are completely swapped, and the Chamfer distance changes by $80\sim 90\%$ , matching the swapped image's depth map. In the case of the other two planes, the RGB image is not much affected and the Chamfer distance of the depth maps is reduced by only $20\sim 30\%$ in most cases.
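+
+ The swap probe itself is simple; a sketch is given below, assuming tri-plane tensors of shape (3, C, H, W) with index 0 as the frontal plane (an assumption about the layout, not a statement about the released code):
+
+ ```python
+ def swap_front_plane(planes_a, planes_b):
+     # Replace the frontal plane of sample A with that of sample B and re-render;
+     # comparing RGB differences and depth-map Chamfer distances then quantifies
+     # how much information each plane carries.
+     swapped = planes_a.clone()
+     swapped[0] = planes_b[0]
+     return swapped
+ ```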
115
+
116
+ Given this analysis, we focus on manipulating the 2D front-plane features to learn additional deformations or exaggerations. We learn a TPS (Thin Plate Spline) [61] network on top of the front plane. Our TPS network is conditioned both on the front-plane features as well as on the $\mathcal{W}$ space to enable multiple transformations. The architecture of the module is similar to the standard StyleGAN2 layer with an MLP appended at the end to predict the control points that transform the features. Hence, as a byproduct, we also enable 3D-geometry editing guided by the learned latent space. We train this module separately after $G_{\mathrm{t}}$ has been trained. We find that joint training is unstable due to exploding gradients arising from the large domain gap between $T_{\mathrm{s}}$ and $T_{\mathrm{t}}$ in the initial stages. Formally, we define this transformation as:
117
+
118
+ $$
119
+ \mathrm {T} (w, f) := \Delta c, \tag {5}
120
+ $$
121
+
122
+ where, $w$ is the latent code, $f$ is the front plane, and $c$ are the control points.
123
+
124
+ Let $c_{\mathrm{I}}$ be the initial control points producing an identity transformation, $(c_{1}, c_{2})$ be the control points corresponding to front planes $(f_{1}, f_{2})$ sampled using $\mathcal{W}$ codes $(w_{1}, w_{2})$ , respectively, and $(c_{1}', c_{2}')$ be points with $(w_{1}, w_{2})$ swapped in the TPS module. To regularize and encourage the module to learn different deformations, we have
125
+
126
+ $$
127
+ \mathrm {R} \left(\mathrm {T} _ {1}\right) := \alpha \sum_ {n = 1} ^ {2} \| c _ {I} - c _ {n} \| _ {1} - \beta \| c _ {1} - c _ {2} \| _ {1} - \sigma \| c _ {1} ^ {\prime} - c _ {2} ^ {\prime} \| _ {1}. \tag {6}
128
+ $$
129
+
130
+ We use the initial control-point regularization to penalize large deviations in the control points, which would otherwise explode. Additionally, to learn extreme exaggerations in $T_{\mathrm{t}}$ and, in expectation, conform to the target distribution of the dataset, we add an additional loss term. Let $S(I)$ be the soft-argmax output of the face segmentation network [34] given an image $I$ ; assuming that $S$ generalizes to caricatures, then
131
+
132
+ $$
133
+ \mathrm{R}\left(\mathrm{T}_{2}\right) := \left\| \mathrm{S}\left(\mathrm{G}_{\mathrm{t}}(w)\right) - \mathrm{S}\left(I_{\mathrm{t}}\right) \right\|_{1} \tag{7}
134
+ $$
135
+
136
+ ![](images/b54b53f3e6f87b4fc83c381fce3ac23524a5f98093017d653f4f9af65eb27f18.jpg)
137
+ Figure 5. Interpolation of $\Delta s$ . Geometric deformation using the interpolation of learned $\Delta s$ parameters.
138
+
139
+ Eq. 6, Eq. 7, and adversarial training loss are used to train the $TPS$ module. We adopt gradient clipping to make sure that the training does not diverge. See the illustrations in Fig. 3 and Fig. 4 of the supplementary materials.
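+
+ A minimal sketch of the two regularizers follows; the weights `alpha`, `beta`, and `sigma` are placeholders, since the text does not state their values here:
+
+ ```python
+ def tps_reg(c_id, c1, c2, c1_swapped, c2_swapped, alpha=1.0, beta=0.1, sigma=0.1):
+     # R(T_1) (Eq. 6): stay close to the identity control points while encouraging
+     # different (w, f) pairs, and their swapped versions, to produce different deformations.
+     keep_identity = (c_id - c1).abs().sum() + (c_id - c2).abs().sum()
+     diversity = (c1 - c2).abs().sum()
+     diversity_swapped = (c1_swapped - c2_swapped).abs().sum()
+     return alpha * keep_identity - beta * diversity - sigma * diversity_swapped
+
+ def seg_reg(seg_generated, seg_target):
+     # R(T_2) (Eq. 7): L1 distance between soft-argmax face segmentations of a generated
+     # avatar and a target-domain image.
+     return (seg_generated - seg_target).abs().mean()
+ ```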
140
+
141
+ # 4. Personalized Avatar Generation and Editing
142
+
143
+ Although 3D domain adaptation adapts $T_{\mathrm{s}} \leftrightarrow T_{\mathrm{t}}$ , it is still a challenge to effectively link the latent spaces of $\mathrm{G_s}$ and $\mathrm{G_t}$ to generate personalized 3D avatars using a single photograph as the reference image. Particularly, the challenge arises due to the discrepancy in the coupled latent spaces when dealing with the projection of real photographs on 3D generators. Moreover, one would like to edit and animate these 3D avatars.
144
+
145
+ Projection. The task is to project a real image into the latent space of $\mathrm{G_s}$ , transfer the latent to $\mathrm{G_t}$ , and further optimize it to construct a 3D avatar. First, we use an optimization-based method to find the $w$ code that minimizes the difference between the generated and the real image in $\mathrm{G_s}$ . To achieve this, the first step is to align the cameras; we follow the steps mentioned in Sec. 3.1 for this. Next, we use a pixel-wise MSE loss and an LPIPS loss to project the image into $\mathrm{G_s}$ [1]. Additionally, to preserve the identity of the subject, we use attribute classifiers; e.g., the caricature dataset [24] provides coupled attribute information for real images and caricatures. We use such an attribute classifier [24, 25] in a post-hoc manner, as we notice that such networks can affect the texture in the target domain and could degenerate to narrow style outputs if applied during training. Moreover, such networks may not be available for all target domains. To avoid overfitting to $\mathrm{G_s}$ and encourage an easier transfer of the optimized latent code to $\mathrm{G_t}$ , we use $\mathcal{W}$ space optimization for this step. Finally, we initialize this $w$ code for $\mathrm{G_t}$ and use an additional attribute classifier loss [25] for the $T_{\mathrm{t}}$ domain along with the depth regularization $\mathrm{R}(D)$ (Eq. 4). As an approximation, we assume that the attribute classifier [24, 25] generalizes across all domains. We use $\mathcal{W} / \mathcal{W}+$ space optimization to control the quality and diversity of the outputs. See Algorithm 1 in the supplementary for the description.
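+
+ A condensed sketch of this two-stage projection is shown below; the generator interfaces `G_s.render`, `G_t.render`, `G_s.mean_w`, and the loss callables are hypothetical stand-ins, and the actual procedure is given by Algorithm 1 in the supplementary:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def project_to_avatar(img, G_s, G_t, lpips, attr_loss, depth_reg, steps=500, lr=1e-2):
+     # Stage 1: optimize a W code in the source generator G_s with MSE + LPIPS.
+     w = G_s.mean_w().clone().requires_grad_(True)
+     opt = torch.optim.Adam([w], lr=lr)
+     for _ in range(steps):
+         rec = G_s.render(w)                       # render under the aligned canonical camera
+         loss = F.mse_loss(rec, img) + lpips(rec, img)
+         opt.zero_grad(); loss.backward(); opt.step()
+
+     # Stage 2: transfer w to G_t and refine with the target-domain attribute loss and R(D).
+     w_t = w.detach().clone().requires_grad_(True)
+     opt = torch.optim.Adam([w_t], lr=lr)
+     for _ in range(steps):
+         avatar = G_t.render(w_t)
+         loss = attr_loss(avatar, img) + depth_reg(avatar)
+         opt.zero_grad(); loss.backward(); opt.step()
+     return w_t.detach()
+ ```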
146
+
147
+ Editing and Animation. Since our 3D domain adaptation is designed to preserve the properties of $\mathcal{W}$ and $S$ spaces, we can perform semantic edits via InterFaceGAN [51],
148
+
149
+ GANSpace [22], StyleSpace [62] etc., and geometric edits using TPS (Sec. 3.4) and $\Delta s$ interpolation (Sec. 3.2). To perform video editing, we design an encoder for EG3D based on $e4e$ [58] to encode videos and transfer the edits from $\mathrm{G_s}$ to $\mathrm{G_t}$ based on the $w$ codes [4,6,59]. We leave a more fine-grained approach for video processing as future work.
150
+
151
+ # 5. Results
152
+
153
+ # 5.1. Quantitative Results
154
+
155
+ In this section, we consider three important evaluations to verify the quality of the texture, geometry, and identity preservation in the new domain using the Caricature, Cartoon, and Pixar toon datasets. We evaluate the ablation of our design choices in the supplementary materials. In the evaluation, let $\mathrm{G}_{\mathrm{base}}$ be the naive fine-tuning baseline, which fine-tunes all parameters of the FFHQ-trained prior $\mathrm{G}_{\mathrm{s}}$ using the original EG3D losses. Note that we still align the cameras in $\mathrm{G}_{\mathrm{base}}$ using the method defined in Sec. 3.1 and use adaptive discriminator augmentation [28] with R1 regularization for a fair comparison.
156
+
157
+ Texture Quality. To verify the quality of the texture, the diversity of samples, and, to some extent, the geometry in the target domain $T_{\mathrm{t}}$ , we compare the FID [23] scores of $\mathrm{G}_{\mathrm{base}}$ and $\mathrm{G}_{\mathrm{t}}$ in Table 1. Note that in the case of Caricatures, we report two scores, i.e., with and without the attribute classifier loss in the training, as discussed in Sec. 4. Notice that our method outperforms the naive baseline by a huge margin in some cases, especially on Caricatures and Cartoons. We attribute these differences to the mode-collapse-prone training of $\mathrm{G}_{\mathrm{base}}$ , which is correlated with the degenerate flat-geometry solution. We show visual results of the flat geometries learned by $\mathrm{G}_{\mathrm{base}}$ and a comparison in Fig. 2.
158
+
159
+ Geometric Quality. To quantify the flat geometries, in Table 2, we show three scores that help us understand such degenerate solutions. Here we consider coupled depth maps generated by sampling in the domains $T_{\mathrm{s}}$ ( $\mathrm{G_s}$ ) and $T_{\mathrm{t}}$ ( $\mathrm{G_t}$ and $\mathrm{G}_{\mathrm{base}}$ ). First, we compute the expectation of the absolute mean differences ( $M_{\mathrm{d}}$ ) of the corresponding foreground depth maps sampled from $T_{\mathrm{s}}$ and $T_{\mathrm{t}}$ . We also compute the expectation of the absolute standard deviation differences ( $S_{\mathrm{d}}$ ) for the same setting. Here, we assume that flatter geometries have a large difference in the depth maps compared to the prior, as indicated by $M_{\mathrm{d}}$ . Moreover, $S_{\mathrm{d}}$ computes the distance in the distribution of the depth values, where a larger difference indicates a narrower distribution, and hence a flatter geometry. We also notice that flat geometry is correlated with the generator learning diverse poses when images are rendered under standard canonical camera parameters, i.e., $\mathrm{M}(0,0,c,r)$ . We hypothesize that, in the case of flatter geometries, the model learns to encode pose
160
+
161
+ ![](images/81649d86574bde316f083bf9578b5c3b43f0db2eb221bbdfe805561260412bd4.jpg)
162
+ Figure 6. Deformations using TPS. Geometric edits using our proposed TPS (Thin Plate Spline) module learned on the frontal tri-plane features. Each sub-figure shows a 3D avatar and three examples of TPS deformations sampled from the learned 3D deformation space.
163
+
164
+ Table 1. FID Computation. FID (Fréchet Inception Distance) between the 2D dataset and the samples generated by the fine-tuned 3D GAN using the baseline $(\mathrm{G_{base}})$ and Ours $(\mathrm{G_t})$ . "*" marks the score with the inclusion of the attribute classifier loss discussed in Sec. 4.
165
+
166
+ <table><tr><td>Method</td><td>Caricatures</td><td>Cartoons</td><td>Pixar Toons</td></tr><tr><td>Gbase</td><td>67.8</td><td>79.0</td><td>15.1</td></tr><tr><td>Gt(Ours)</td><td>19.4/20.2*</td><td>12.8</td><td>12.4</td></tr></table>
167
+
168
+ Table 2. Geometry Evaluation. Comparing the geometry using baseline method $(\mathrm{G}_{\mathrm{base}})$ and Ours $(\mathrm{G}_{\mathrm{t}})$ . For the definition of $M_{\mathrm{d}}$ , $S_{\mathrm{d}}$ and $\mathrm{R}(\mathrm{T}_2)$ , refer to Sec. 5.1.
169
+
170
+ <table><tr><td>Metric</td><td>Method</td><td>Caricatures</td><td>Cartoons</td><td>Pixar</td></tr><tr><td rowspan="2">Md ↓</td><td>Gbase</td><td>0.47</td><td>0.21</td><td>0.29</td></tr><tr><td>Gt (Ours)</td><td>0.21</td><td>0.13</td><td>0.13</td></tr><tr><td rowspan="2">Sd ↓</td><td>Gbase</td><td>0.22</td><td>0.14</td><td>0.15</td></tr><tr><td>Gt (Ours)</td><td>0.15</td><td>0.10</td><td>0.09</td></tr><tr><td rowspan="2">R(T2) ↓</td><td>Gbase</td><td>2.99</td><td>3.39</td><td>4.01</td></tr><tr><td>Gt (Ours)</td><td>2.27</td><td>1.62</td><td>1.56</td></tr></table>
171
+
172
+ Table 3. Identity Preservation. Identity preservation using baseline $(\mathrm{G}_{\mathrm{base}})$ and Ours $(\mathrm{G}_{\mathrm{t}})$ .
173
+
174
+ <table><tr><td>Method</td><td>Caricatures</td><td>Cartoons</td><td>Pixar Toons</td></tr><tr><td>Gbase</td><td>1.28</td><td>0.92</td><td>0.85</td></tr><tr><td>Gt(Ours)</td><td>0.87</td><td>0.81</td><td>0.73</td></tr></table>
175
+
176
+ information in the earlier layers instead of leaving it camera-view-dependent. To quantify this, since pose information may not be available for some domains (e.g., cartoons), we compute the $\mathrm{R}(\mathrm{T}_2)$ scores between corresponding images in the domains $T_{\mathrm{s}}$ ( $\mathrm{G}_{\mathrm{s}}$ ) and $T_{\mathrm{t}}$ ( $\mathrm{G}_{\mathrm{t}}$ and $\mathrm{G}_{\mathrm{base}}$ ). Note that these scores are computed without the TPS module. Our scores are lower on all three metrics, hence validating that our method avoids the degenerate solution and preserves the geometric distribution of the prior. For a discussion on the TPS module and ablations, refer to the supplementary materials.
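+
+ One plausible implementation of $M_{\mathrm{d}}$ and $S_{\mathrm{d}}$ over coupled depth maps is sketched below; the exact masking and normalization are not specified in the text, so treat this as an interpretation:
+
+ ```python
+ import torch
+
+ def geometry_scores(depths_src, depths_tgt, fg_masks):
+     # M_d: E[|mean(D_s) - mean(D_t)|], S_d: E[|std(D_s) - std(D_t)|] over coupled
+     # foreground depth maps sampled with the same latent codes in T_s and T_t.
+     m_d, s_d = [], []
+     for d_s, d_t, m in zip(depths_src, depths_tgt, fg_masks):
+         fg_s, fg_t = d_s[m > 0], d_t[m > 0]
+         m_d.append((fg_s.mean() - fg_t.mean()).abs())
+         s_d.append((fg_s.std() - fg_t.std()).abs())
+     return torch.stack(m_d).mean(), torch.stack(s_d).mean()
+ ```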
177
+
178
+ Identity Preservation. Identity preservation score is another important evaluation to check the quality of latent space linking between $\mathrm{G_s}$ and $\mathrm{G_t}$ . In Table 3, we compute the attribute loss (BCE loss) between the domains $T_{\mathrm{s}}$ and $T_{\mathrm{t}}$ using the attribute classifiers [24, 25]. Note that our method
179
+
180
+ ![](images/ffcb362871dc7ba10eb393f7d821dc10ff50e964ee7a1b5456194ee10c11b4a4.jpg)
181
+ Figure 7. Local edits. Local edits performed on the 3D avatars using the $S$ space.
182
+
183
+ is able to preserve the identity better across the domains.
184
+
185
+ # 5.2. Qualitative Results
186
+
187
+ For qualitative results, we show the results of the domain adaptation, as well as the personalized edits (geometric and semantic), performed on the resultant 3D avatars. First, in order to show the quality of domain adaptation, identity preservation, and geometric consistency, in Fig. 3, we show results from $\mathrm{G}_{\mathrm{s}}$ and corresponding results from 3D avatar generator $\mathrm{G}_{\mathrm{t}}$ trained on Caricatures, Pixar toons, Cartoons, and Comic domains. Next, in order to show that the method generalizes to real images, we use the method described in
188
+
189
+ ![](images/b1f8079284bc58efc40c07e2aaf6929d07f1e41440083f4cf8015ce8654f1cec.jpg)
190
+ Figure 8. 3D avatar animation. Animation of 3D avatars generated using a driving video encoded in source domain $T_{s}$ and applied to samples in target domain $T_{t}$ . The top row shows the driving video and the subsequent rows show generated animations using a random Caricature or Pixar toon. The head pose is changed in each frame of the generated animation to show 3D consistency.
191
+
192
+ Sec. 4 to project and transfer the latent code from $\mathrm{G_s}$ to $\mathrm{G_t}$ to produce the 3D avatars. In Fig. 4, we show our results of real-to-3D-avatar transfer. Notice the quality, both in terms of texture and geometry, achieved by our method in these results. Next, we show the geometric and semantic edits possible to produce personalized 3D avatars:
193
+
194
+ Geometry Edits. We show two types of geometric edits i.e. $\Delta s$ interpolation (Sec. 3.2) and deformation using $TPS$ (Sec. 3.4). First, in Fig. 5, we show the geometry interpolation by interpolating between original $s$ activations of $\mathrm{G_s}$ and learned $\Delta s$ parameters. In Fig. 6, we show some additional exaggerations in caricatures using the learned 3D deformation latent space of $TPS$ module.
195
+
196
+ Semantic Edits and Animation. Since our method encourages the latent regularization to preserve the properties of the latent space learned by the $\mathrm{G}_{\mathrm{s}}$ generator, in Fig. 7 we show $S$ space edits performed on the 3D avatars. Notice the quality of the edits in terms of locality and adaptability. Additionally, we can edit semantics like hair, as opposed to 3D-morphable-model-based methods. In Fig. 8, thanks to the latent space semantics preservation ensured by our method, we can perform video edits to create a coherent animation based on the difference of $w$ codes of a video encoded in $\mathrm{G}_{\mathrm{s}}$ (Sec. 4) and applied to layers $7 - 10$ in $\mathrm{G}_{\mathrm{t}}$ . Notice the quality of expressions, identity preservation, and 3D consistency across each identity in each row.
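+
+ A small sketch of this edit-transfer step; the W+ layout and `G_t.render` are hypothetical stand-ins, with the per-frame motion applied as a w-code difference on layers 7-10 only:
+
+ ```python
+ def animate_avatar(G_t, w_avatar, w_frames, w_neutral, layers=range(7, 11)):
+     # Apply per-frame W+ deltas from a video encoded in the source domain to the avatar code.
+     rendered = []
+     for w_frame in w_frames:
+         w_edit = w_avatar.clone()                 # (num_layers, 512) W+ code of the avatar
+         delta = w_frame - w_neutral               # motion relative to a neutral frame
+         for layer in layers:
+             w_edit[layer] += delta[layer]
+         rendered.append(G_t.render(w_edit))
+     return rendered
+ ```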
197
+
198
+ # 6. Conclusion
199
+
200
+ We tackled two open research problems in this paper. In the first part, we proposed the first domain adaptation method for 3D-GANs, to the best of our knowledge. This part yields two linked EG3D generators: one in the photorealistic source domain of faces, and another in an artistic target domain. As possible target domains, we show results for Cartoons, Caricatures, and Comics. In the second part, we built on domain adaptation to create 3D avatars in an artistic domain that can be edited and animated. Our framework consists of multiple technical components introduced in this paper. First, we propose a technique for camera space estimation for artistic domains. Second, we introduce a set of regularizers and loss functions that regularize the fine-tuning of EG3D in such a way that enough of the 3D structure and geometry of the original model is kept, while the distinguishing attributes of the artistic domain, such as textures, colors, and local geometric deformations, can still be learned. Third, we introduce a geometric deformation module that can reintroduce larger geometric deformations in a controlled manner. These larger geometric deformations can interact and cooperate with EG3D so that semantic edits are still possible. Finally, we propose an embedding algorithm that is especially suitable for two linked EG3D generator networks.
201
+
202
+ # References
203
+
204
+ [1] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan: How to embed images into the stylegan latent space? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4432-4441, Seoul, Korea, 2019. IEEE. 2, 6
205
+ [2] Rameen Abdal, Peihao Zhu, John Femiani, Niloy Mitra, and Peter Wonka. Clip2stylegan: Unsupervised extraction of stylegan edit directions. In ACM SIGGRAPH 2022 Conference Proceedings, SIGGRAPH '22, New York, NY, USA, 2022. Association for Computing Machinery. 2
206
+ [3] Rameen Abdal, Peihao Zhu, Niloy J. Mitra, and Peter Wonka. Styleflow: Attribute-conditioned exploration of stylegan-generated images using conditional continuous normalizing flows. ACM Trans. Graph., 40(3), may 2021. 2, 3
207
+ [4] Rameen Abdal, Peihao Zhu, Niloy J. Mitra, and Peter Wonka. Video2stylegan: Disentangling local and global variations in a video, 2022. 6
208
+ [5] Yuval Alaluf, Or Patashnik, and Daniel Cohen-Or. Restyle: A residual-based stylegan encoder via iterative refinement. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2021. 2
209
+ [6] Yuval Alaluf, Or Patashnik, Zongze Wu, Asif Zamir, Eli Shechtman, Dani Lischinski, and Daniel Cohen-Or. Third time's the charm? image and video editing with stylegan3. CoRR, abs/2201.13433, 2022. 6
210
+ [7] Yuval Alaluf, Omer Tov, Ron Mokady, Rinon Gal, and Amit H. Bermano. Hyperstyle: Stylegan inversion with hypernetworks for real image editing. CoRR, abs/2111.15666, 2021. 2
211
+ [8] Anonymous. 3d generation on imagenet. In Open Review, 2023. 3
212
+ [9] Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5855–5864, 2021. 2
213
+ [10] Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. In arXiv, 2021. 1, 2, 3
214
+ [11] Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3d generative adversarial networks, 2021. 2, 4
215
+ [12] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5799-5809, 2021. 2
216
+ [13] Yen-Chi Cheng, Chieh Hubert Lin, Hsin-Ying Lee, Jian Ren, Sergey Tulyakov, and Ming-Hsuan Yang. Inout: Diverse image outpainting via gan inversion. In IEEE Conference on Computer Vision and Pattern Recognition, 2022. 2
217
+
218
+ [14] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020. 2
219
+ [15] Min Jin Chong and David A. Forsyth. Jojogan: One shot face stylization. CoRR, abs/2112.11641, 2021. 2
220
+ [16] Min Jin Chong, Hsin-Ying Lee, and David Forsyth. Stylegan of all trades: Image manipulation with only pretrained stylegan. arXiv preprint arXiv:2111.01619, 2021. 2
221
+ [17] Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, and Daniel Cohen-Or. Stylegan-nada: Clip-guided domain adaptation of image generators. arXiv preprint arXiv:2108.00946, 2021. 2
222
+ [18] Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, and Daniel Cohen-Or. Stylegan-nada: Clip-guided domain adaptation of image generators, 2021. 2
223
+ [19] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks, 2014. 2
224
+ [20] Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. Stylenerf: A style-based 3d aware generator for high-resolution image synthesis. In International Conference on Learning Representations, 2022. 2
225
+ [21] Fangzhou Han, Shuquan Ye, Mingming He, Menglei Chai, and Jing Liao. Exemplar-based 3d portrait stylization. IEEE Transactions on Visualization and Computer Graphics, 2021. 2
226
+ [22] Erik Härkönen, Aaron Hertzmann, Jaakko Lehtinen, and Sylvain Paris. Ganspace: Discovering interpretable gan controls. arXiv preprint arXiv:2004.02546, 2020. 2, 6
227
+ [23] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. 6
228
+ [24] Jing Huo, Wenbin Li, Yinghuan Shi, Yang Gao, and Hujun Yin. Webcaricature: a benchmark for caricature recognition. In British Machine Vision Conference, 2018. 3, 6, 7
229
+ [25] Wonjong Jang, Gwangjin Ju, Yucheol Jung, Jiaolong Yang, Xin Tong, and Seungyong Lee. Stylecarigan: Caricature generation via stylegan feature map modulation. 40(4), 2021. 2, 3, 6, 7
230
+ [26] Yucheol Jung, Wonjong Jang, Soongjin Kim, Jiaolong Yang, Xin Tong, and Seungyong Lee. Deep deformable 3d caricatures with learned shape control. In Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings. ACM, aug 2022. 2
231
+ [27] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation, 2017. 2
232
+ [28] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. In Proc. NeurIPS, 2020. 1, 2, 5, 6
233
+ [29] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. arXiv preprint arXiv:2006.06676, 2020. 3
234
+
235
+ [30] Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks, 2021. 1, 2
236
+ [31] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401-4410, 2019. 1, 3
237
+ [32] Tero Karras, Samuli Laine, and Timo Aila. A Style-Based generator architecture for generative adversarial networks. IEEE transactions on pattern analysis and machine intelligence, 43(12):4217-4228, Dec. 2021. 2
238
+ [33] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proc. CVPR, 2020. 2
239
+ [34] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. Maskgan: Towards diverse and interactive facial image manipulation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 4, 5
240
+ [35] Thomas Leimkuhler and George Drettakis. Freestylegan: Free-view editable portrait rendering with the camera manifold. 40(6), 2021. 3
241
+ [36] Chieh Hubert Lin, Hsin-Ying Lee, Yen-Chi Cheng, Sergey Tulyakov, and Ming-Hsuan Yang. Infinitygan: Towards infinite-pixel image synthesis. In International Conference on Learning Representations (ICLR), 2022. 2
242
+ [37] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European conference on computer vision, pages 405-421. Springer, 2020. 2
243
+ [38] Sangwoo Mo, Minsu Cho, and Jinwoo Shin. Freeze the discriminator: a simple baseline for fine-tuning gans, 2020. 2
244
+ [39] Michael Niemeyer and Andreas Geiger. Campari: Camera-aware decomposed generative neural radiance fields. In 2021 International Conference on 3D Vision (3DV), pages 951-961. IEEE, 2021. 2
245
+ [40] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11453-11464, 2021. 2
246
+ [41] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. StyleSDF: High-Resolution 3D-Consistent Image and Geometry Generation. arXiv preprint arXiv:2112.11427, 2021. 1, 2
247
+ [42] Xingang Pan, Bo Dai, Ziwei Liu, Chen Change Loy, and Ping Luo. Do 2d gans know 3d shape? unsupervised 3d shape reconstruction from 2d image gans. arXiv preprint arXiv:2011.00844, 2020. 2
248
+ [43] Keunhong Park, Utkarsh Sinha, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, Steven M Seitz, and Ricardo Martin-Brualla. Deformable neural radiance fields. arXiv preprint arXiv:2011.12948, 2020. 2
249
+ [44] Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski. Styleclip: Text-driven manipulation of stylegan imagery, 2021. 2
250
+
251
+ [45] Justin N. M. Pinkney and Doron Adler. Resolution dependent gan interpolation for controllable image synthesis between domains, 2020. 2, 3
252
+ [46] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. CoRR, abs/2103.00020, 2021. 2
253
+ [47] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks, 2015. 2
254
+ [48] Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, and Daniel Cohen-Or. Encoding in style: a stylegan encoder for image-to-image translation. arXiv preprint arXiv:2008.00951, 2020. 2
255
+ [49] Daniel Roich, Ron Mokady, Amit H Bermano, and Daniel Cohen-Or. Pivotal tuning for latent-based editing of real images. arXiv preprint arXiv:2106.05744, 2021. 2
256
+ [50] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis. In Advances in Neural Information Processing Systems (NeurIPS), 2020. 2
257
+ [51] Yujun Shen, Ceyuan Yang, Xiaoou Tang, and Bolei Zhou. Interfacegan: Interpreting the disentangled face representation learned by gans. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. 2, 6
258
+ [52] Yichun Shi, Divyansh Aggarwal, and Anil K Jain. Lifting 2d stylegan for 3d-aware face generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6258-6266, 2021. 2
259
+ [53] Ivan Skorokhodov, Aliaksandr Siarohin, Yinghao Xu, Jian Ren, Hsin-Ying Lee, Peter Wonka, and Sergey Tulyakov. 3d generation on imagenet. In International Conference on Learning Representations (ICLR), 2023. 2
260
+ [54] Guoxian Song, Linjie Luo, Jing Liu, Wan-Chun Ma, Chunpong Lai, Chuanxia Zheng, and Tat-Jen Cham. Agilegan: Stylizing portraits by inversion-consistent transfer learning. ACM Trans. Graph., 40(4), jul 2021. 2
261
+ [55] Jingxiang Sun, Xuan Wang, Yichun Shi, Lizhen Wang, Jue Wang, and Yebin Liu. Ide-3d: Interactive disentangled editing for high-resolution 3d-aware portrait synthesis, 2022. 1, 2
262
+ [56] Ayush Tewari, Mohamed Elgharib, Gaurav Bharaj, Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zollhofer, and Christian Theobalt. Stylerig: Rigging stylegan for 3d control over portrait images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6142-6151, 2020. 2
263
+ [57] Ayush Tewari, Mohamed Elgharib, Mallikarjun BR, Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zöllhofer, and Christian Theobalt. Pie: Portrait image embedding for semantic control. volume 39, December 2020. 2
264
+ [58] Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, and Daniel Cohen-Or. Designing an encoder for stylegan image manipulation. arXiv preprint arXiv:2102.02766, 2021. 2, 6
265
+
266
+ [59] Rotem Tzaban, Ron Mokady, Rinon Gal, Amit H. Bermano, and Daniel Cohen-Or. Stitch it in time: Gan-based facial editing of real videos. CoRR, abs/2201.08361, 2022. 6
267
+ [60] Can Wang, Menglei Chai, Mingming He, Dongdong Chen, and Jing Liao. Cross-domain and disentangled face manipulation with 3d guidance. IEEE Transactions on Visualization and Computer Graphics, 2022. 2
268
+ [61] WarBean. tps-stn-pytorch. https://github.com/WarBean/tps_stn_pytorch. 5
269
+ [62] Zongze Wu, Dani Lischinski, and Eli Shechtman. Stylespace analysis: Disentangled controls for stylegan image generation. arXiv preprint arXiv:2011.12799, 2020. 2, 4, 6
270
+ [63] Yinghao Xu, Menglei Chai, Zifan Shi, Sida Peng, Ivan Skorokhodov, Aliaksandr Siarohin, Ceyuan Yang, Yujun Shen, Hsin-Ying Lee, Bolei Zhou, et al. Discoscene: Spatially disentangled generative radiance fields for controllable 3d-aware scene synthesis. In IEEE Conference on Computer Vision and Pattern Recognition, 2023. 2
271
+ [64] Yinghao Xu, Sida Peng, Ceyuan Yang, Yujun Shen, and Bolei Zhou. 3d-aware image synthesis via learning structural and textural representations. arXiv preprint arXiv:2112.10759, 2021. 1, 2
272
+ [65] Shuai Yang, Liming Jiang, Ziwei Liu, and Chen Change Loy. Pastiche master: Exemplar-based high-resolution portrait style transfer. In CVPR, 2022. 2, 3
273
+ [66] Zipeng Ye, Mengfei Xia, Yanan Sun, Ran Yi, Minjing Yu, Juyong Zhang, Yu-Kun Lai, and Yong-Jin Liu. 3d-CariGAN: An end-to-end solution to 3d caricature generation from normal face photos. IEEE Transactions on Visualization and Computer Graphics, pages 1-1, 2021. 2
274
+ [67] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015. 2
275
+ [68] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020. 2
276
+ [69] Jiapeng Zhu, Yujun Shen, Deli Zhao, and Bolei Zhou. Indomain gan inversion for real image editing. In European Conference on Computer Vision, pages 592-608. Springer, 2020. 2
277
+ [70] Peihao Zhu, Rameen Abdal, John Femiani, and Peter Wonka. Mind the gap: Domain gap control for single shot domain adaptation for generative adversarial networks. In International Conference on Learning Representations, 2022. 2
278
+ [71] Peihao Zhu, Rameen Abdal, Yipeng Qin, John Femiani, and Peter Wonka. Improved stylegan embedding: Where are the good latents?, 2020. 2
279
+ [72] zllrunning. face-parsing.pytorch. https://github.com/zllrunning/face-parsing.PyTorch. 2, 3
3davatarganbridgingdomainsforpersonalizededitableavatars/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:512c664682df61a96a9ea32397a567bc1dd417b8bd8a58ab5552bbb49974d4c8
3
+ size 862848
3davatarganbridgingdomainsforpersonalizededitableavatars/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3484ef2a63e4b61ab49bf79209d89f2de25cf8670f8402e328d45929880cfbbb
3
+ size 483146
3dawareconditionalimagesynthesis/b9625555-02d4-4da7-b507-7cd64cc67a00_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ec584b910b617ab64bb4fdaadc5ad7821781533984855e30f46b73f34a7b0438
3
+ size 86952
3dawareconditionalimagesynthesis/b9625555-02d4-4da7-b507-7cd64cc67a00_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e793cddb4d4e8e03a492881432fc52cc55b688bc27d290c83b69f1d882002504
3
+ size 113770
3dawareconditionalimagesynthesis/b9625555-02d4-4da7-b507-7cd64cc67a00_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a288dffa8e6f9266d89fe3ebfbeee7b16bcc97fd5bb182c75486fd36cb2d9dc3
3
+ size 9865346
3dawareconditionalimagesynthesis/full.md ADDED
@@ -0,0 +1,382 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # 3D-aware Conditional Image Synthesis
2
+
3
+ Kangle Deng Gengshan Yang Deva Ramanan Jun-Yan Zhu Carnegie Mellon University
4
+
5
+ ![](images/d0db08c534775dbb6eca15e32dcc39d29d6901cc09dc50bce140e3be6216c11a.jpg)
6
+ Figure 1. Given a 2D label map as input, such as a segmentation or edge map, our model learns to predict high-quality 3D labels, geometry, and appearance, which enables us to render both labels and RGB images from different viewpoints. The inferred 3D labels further allow interactive editing of label maps from any viewpoint, as shown in Figure 10.
7
+
8
+ # Abstract
9
+
10
+ We propose pix2pix3D, a 3D-aware conditional generative model for controllable photorealistic image synthesis. Given a 2D label map, such as a segmentation or edge map, our model learns to synthesize a corresponding image from different viewpoints. To enable explicit 3D user control, we extend conditional generative models with neural radiance fields. Given widely-available posed monocular image and label map pairs, our model learns to assign a label to every 3D point in addition to color and density, which enables it to render the image and pixel-aligned label map simultaneously. Finally, we build an interactive system that allows users to edit the label map from different viewpoints and generate outputs accordingly.
11
+
12
+ # 1. Introduction
13
+
14
+ Content creation with generative models has witnessed tremendous progress in recent years, enabling high-quality,
15
+
16
+ user-controllable image and video synthesis [19, 20, 24, 34]. In particular, image-to-image translation methods [29, 56, 84] allow users to interactively create and manipulate a high-resolution image given a 2D input label map. Unfortunately, existing image-to-image translation methods operate purely in 2D, without explicit reasoning of the underlying 3D structure of the content. As shown in Figure 1, we aim to make conditional image synthesis 3D-aware, allowing not only 3D content generation but also viewpoint manipulation and attribute editing (e.g., car shape) in 3D.
17
+
18
+ Synthesizing 3D content conditioned on user input is challenging. For model training, it is costly to obtain large-scale datasets with paired user inputs and their desired 3D outputs. During test time, 3D content creation often requires multi-view user inputs, as a user may want to specify the details of 3D objects using 2D interfaces from different viewpoints. However, these inputs may not be 3D-consistent, providing conflicting signals for 3D content creation.
19
+
20
+ To address the above challenges, we extend conditional generative models with 3D neural scene representations.
21
+
22
+ To enable cross-view editing, we additionally encode semantic information in 3D, which can then be rendered as 2D label maps from different viewpoints. We learn the aforementioned 3D representation using only 2D supervision in the form of image reconstruction and adversarial losses. While the reconstruction loss ensures the alignment between 2D user inputs and corresponding 3D content, our pixel-aligned conditional discriminator encourages the appearance and labels to look plausible while remaining pixel-aligned when rendered into novel viewpoints. We also propose a cross-view consistency loss to enforce the latent codes to be consistent from different viewpoints.
23
+
24
+ We focus on 3D-aware semantic image synthesis on the CelebAMask-HQ [38], AFHQ-cat [16], and shapenet-car [10] datasets. Our method works well for various 2D user inputs, including segmentation maps and edge maps. Our method outperforms several 2D and 3D baselines, such as Pix2NeRF variants [6], SofGAN [11], and SEAN [87]. We further ablate the impact of various design choices and demonstrate applications of our method, such as cross-view editing and explicit user control over semantics and style. Please see our website for more results and code, and check out the full version of our paper on arXiv.
25
+
26
+ # 2. Related Work
27
+
28
+ Neural Implicit Representation. Neural implicit fields, such as DeepSDF and NeRFs [46, 54], model the appearance of objects and scenes with an implicitly defined, continuous 3D representation parameterized by neural networks. They have produced significant results for 3D reconstruction [67, 88] and novel view synthesis applications [39, 43, 44, 48, 80] thanks to their compactness and expressiveness. NeRF and its descendants aim to optimize a network for an individual scene, given hundreds of images from multiple viewpoints. Recent works further reduce the number of training views through learning network initializations [13, 70, 78], leveraging auxiliary supervision [18, 30], or imposing regularization terms [50]. Recently, explicit or hybrid representations of radiance fields [12, 48, 61] have also shown promising results regarding quality and speed. In our work, we use hybrid representations for modeling both user inputs and outputs in 3D, focusing on synthesizing novel images rather than reconstructing an existing scene. A recent work Pix2NeRF [6] aims to translate a single image to a neural radiance field, which allows single-image novel view synthesis. In contrast, we focus on 3D-aware user-controlled content generation.
29
+
30
+ Conditional GANs. Generative adversarial networks (GANs) learn the distribution of natural images by forcing the generated and real images to be indistinguishable. They have demonstrated high-quality results on 2D image synthesis and manipulation [1, 3, 5, 20, 33-35, 59, 65, 72, 82, 83].
31
+
32
+ Several methods adopt image-conditional GANs [29, 47] for user-guided image synthesis and editing applications [26, 27, 38, 40, 55, 56, 62, 73, 84, 87]. In contrast, we propose a 3D-aware generative model conditioned on 2D user inputs that can render view-consistent images and enable interactive 3D editing. Recently, SofGAN [11] uses a 3D semantic map generator and a 2D semantic-to-image generator to enable 3D-aware generation, but using 2D generators does not ensure 3D consistency.
33
+
34
+ 3D-aware Image Synthesis. Early data-driven 3D image editing systems can achieve various 3D effects but often require a huge amount of manual effort [14, 37]. Recent works have integrated the 3D structure into learning-based image generation pipelines using various geometric representations, including voxels [22,86], voxelized 3D features [49], and 3D morphable models [71, 77]. However, many rely on external 3D data [71, 77, 86]. Recently, neural scene representations have been integrated into GANs to enable 3D-aware image synthesis [8,9,21,51-53,64,76]. Intriguingly, these 3D-aware GANs can learn 3D structures without any 3D supervision. For example, StyleNeRF [21] and EG3D [8] learn to generate 3D representations by modulating either NeRFs or explicit representations with latent style vectors. This allows them to render high-resolution view-consistent images. Unlike the above methods, we focus on conditional synthesis and interactive editing rather than random sampling. Several works [17,28,42,75] have explored sketch-based shape generation but they do not allow realistic image synthesis.
35
+
36
+ Closely related to our work, Huang et al. [25] propose synthesizing novel views conditioned on a semantic map. Our work differs in three ways. First, we can predict full 3D labels, geometry, and appearance, rather than only 2D views, which enables cross-view editing. Second, our method can synthesize images with a much wider baseline than Huang et al. [25]. Finally, our learning algorithm does not require ground-truth multi-view images of the same scene. Two recent works, FENeRF [69] and 3DSGAN [79], also leverage semantic labels for training 3D-aware GANs, but they do not support conditional inputs and require additional efforts (e.g., GAN inversion) to allow user editing. Three concurrent works, IDE-3D [68], NeRFFaceEditing [31], and sem2nerf [15], also explore the task of 3D-aware generation based on segmentation masks. However, IDE-3D and sem2nerf only allow editing on a fixed view, and NeRFFaceEditing focuses on real image editing rather than generation. None of them includes results for other input modalities. In contrast, we present a general-purpose method that works well for diverse datasets and input controls.
37
+
38
+ # 3. Method
39
+
40
+ Given a 2D label map $\mathbf{I}_{\mathbf{s}}$ , such as a segmentation or edge map, pix2pix3D generates a 3D-volumetric representation of geometry, appearance, and labels that can be rendered
41
+
42
+ ![](images/c1c686b869a2e8159fc2727cb9533b35544f9d7cbf22f1705c03b3f7bd14fb26.jpg)
43
+ Figure 2. Overall pipeline. Given a 2D label map (e.g., segmentation map), a random latent code $z$ , and a camera pose $\hat{P}$ as inputs, our generator renders the label map and image from viewpoint $\hat{P}$ . Intuitively, the input label map specifies the geometric structure, while the latent code captures the appearance, such as hair color. We begin with an encoder that encodes both the input label map and the latent code into style vectors $\mathbf{w}^{+}$ . We then use $\mathbf{w}^{+}$ to modulate our 3D representation, which takes a spatial point $\mathbf{x}$ and outputs (1) color $\mathbf{c} \in \mathbb{R}^3$ , (2) density $\sigma$ , (3) feature $\phi \in \mathbb{R}^l$ , and (4) label $\mathbf{s} \in \mathbb{R}^c$ . We then perform volumetric rendering and 2D upsampling to get the high-resolution label map $\hat{\mathbf{I}}_{\mathbf{s}}^{+}$ and RGB Image $\hat{\mathbf{I}}_{\mathbf{c}}^{+}$ . For those rendered from ground-truth poses, we compare them to ground-truth labels and images with an LPIPS loss and label reconstruction loss. We apply a GAN loss on labels and images rendered from both novel and original viewpoints.
44
+
45
+ from different viewpoints. Figure 2 provides an overview. We first introduce the formulation of our 3D conditional generative model for 3D-aware image synthesis in Section 3.1. Then, in Section 3.2, we discuss how to learn the model from color and label map pairs $\{\mathbf{I}_{\mathrm{c}},\mathbf{I}_{\mathrm{s}}\}$ associated with poses $\mathbf{P}$ .
46
+
47
+ # 3.1. Conditional 3D Generative Models
48
+
49
+ Similar to EG3D [8], we adopt a hybrid representation for the density and appearance of a scene and use style vectors to modulate the 3D generations. To condition the 3D representations on 2D label map inputs, we introduce a conditional encoder that maps a 2D label map into a latent style vector. Additionally, pix2pix3D produces 3D labels that can be rendered from different viewpoints, allowing for cross-view user editing.
50
+
51
+ Conditional Encoder. Given a 2D label map input $\mathbf{I}_{\mathrm{s}}$ and a random latent code sampled from the spherical Gaussian space $\mathbf{z} \sim \mathcal{N}(0, I)$ , our conditional encoder $E$ outputs a list of style vectors $\mathbf{w}^{+} \in \mathbb{R}^{l \times 256}$ ,
52
+
53
+ $$
54
+ \mathbf{w}^{+} = E(\mathbf{I}_{\mathbf{s}}, \mathbf{z}),
55
+ $$
56
+
57
+ where $l = 13$ is the number of layers to be modulated.
58
+
59
+ Specifically, we encode $\mathbf{I}_{\mathrm{s}}$ into the first 7 style vectors that represent the global geometric information of the scene. We then feed the random latent code $\mathbf{z}$ through a Multi-Layer Perceptron (MLP) mapping network to obtain the rest of the style vectors that control the appearance.
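For concreteness, a minimal PyTorch-style sketch of such an encoder is shown below. The module layout, channel widths, and pooling are illustrative assumptions; only the overall interface (a label map and a latent code in, $13 \times 256$ style vectors out, split 7/6 between geometry and appearance) follows the description above.

```python
import torch
import torch.nn as nn

class ConditionalEncoder(nn.Module):
    """Map a label map I_s and a latent code z to style vectors w+ of shape (13, 256)."""
    def __init__(self, label_channels: int, w_dim: int = 256):
        super().__init__()
        # Convolutional branch: label map -> 7 geometry style vectors.
        self.label_branch = nn.Sequential(
            nn.Conv2d(label_channels, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 7 * w_dim),
        )
        # MLP mapping network: z -> 6 appearance style vectors.
        self.mapping = nn.Sequential(
            nn.Linear(w_dim, w_dim), nn.LeakyReLU(0.2),
            nn.Linear(w_dim, 6 * w_dim),
        )
        self.w_dim = w_dim

    def forward(self, label_map: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        w_geo = self.label_branch(label_map).view(-1, 7, self.w_dim)   # geometry vectors
        w_app = self.mapping(z).view(-1, 6, self.w_dim)                # appearance vectors
        return torch.cat([w_geo, w_app], dim=1)                        # (B, 13, 256)

# Example: a one-hot 19-class segmentation map and a random z.
# w_plus = ConditionalEncoder(label_channels=19)(seg_onehot, torch.randn(1, 256))
```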
60
+
61
+ Conditional 3D Representation. Our 3D representation is parameterized by tri-planes followed by a 2-layer MLP $f$ [8], which takes in a spatial point $\mathbf{x} \in \mathbb{R}^3$ and returns 4 types of outputs: (1) color $\mathbf{c} \in \mathbb{R}^3$ , (2) density $\sigma \in \mathbb{R}^+$ , (3) feature $\phi \in \mathbb{R}^{64}$ for the purpose of 2D upsampling, and most notably, (4) label $\mathbf{s} \in \mathbb{R}^c$ , where $c$ is the number of classes if $\mathbf{I_s}$ is a segmentation map, or 1 for edge maps. We make the field conditional by modulating the generation of the tri-planes $F^{\mathrm{tri}}$ with the style vectors $\mathbf{w}^+$ . We also remove the view dependence of the color following [8, 21]. Formally,
62
+
63
+ $$
64
+ (\mathbf{c}, \mathbf{s}, \sigma, \phi) = f(F^{\mathrm{tri}}_{\mathbf{w}^{+}}(\mathbf{x})).
65
+ $$
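A sketch of how such a modulated tri-plane could be queried is given below. The plane layout (XY, XZ, YZ), the summation across planes, and the tensor shapes are assumptions for illustration; generating the planes themselves (a convolutional generator modulated by $\mathbf{w}^{+}$) and the 2-layer decoder that maps the sampled feature to $(\mathbf{c}, \mathbf{s}, \sigma, \phi)$ are omitted.

```python
import torch
import torch.nn.functional as F

def query_triplane(planes: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Sample tri-plane features at 3D points.
    planes: (3, C, H, W) feature planes (XY, XZ, YZ), produced under modulation by w+.
    x:      (N, 3) query points in [-1, 1]^3.
    Returns (N, C) features, summed over the three planes.
    """
    coords = torch.stack([x[:, [0, 1]], x[:, [0, 2]], x[:, [1, 2]]])   # (3, N, 2)
    grid = coords.unsqueeze(1)                                         # (3, 1, N, 2)
    feats = F.grid_sample(planes, grid, align_corners=False)           # (3, C, 1, N)
    return feats.squeeze(2).sum(dim=0).t()                             # (N, C)

# A small MLP f would then decode each sampled feature into color c, density sigma,
# upsampling feature phi, and label logits s, as in the equation above.
```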
66
+
67
+ Volume Rendering and Upsampling. We apply volumetric rendering to synthesize color images [32, 46]. In addition, we render label maps, which are crucial for enabling cross-view editing (Section 4.3) and improving rendering quality
68
+
69
+ (Table 1). Given a viewpoint $\hat{P}$ looking at the scene origin, we sample $N$ points along the ray that emanates from a pixel location and query density, color, labels, and feature information from our 3D representation. Let $\mathbf{x_i}$ be the i-th sampled point along the ray $r$ , and let $\mathbf{c}_i$ , $\mathbf{s}_i$ , and $\phi_i$ be the color, labels, and features of $\mathbf{x_i}$ . Similar to [69], the color, label map, and feature images are computed as the weighted combination of the queried values,
70
+
71
+ $$
72
+ \hat{\mathbf{I}}_{\mathbf{c}}(r) = \sum_{i=1}^{N} \tau_{i} \mathbf{c}_{i}, \quad \hat{\mathbf{I}}_{\mathbf{s}}(r) = \sum_{i=1}^{N} \tau_{i} \mathbf{s}_{i}, \quad \hat{\mathbf{I}}_{\phi}(r) = \sum_{i=1}^{N} \tau_{i} \phi_{i}, \tag{1}
73
+ $$
74
+
75
+ where the transmittance $\tau_{i}$ is computed as the probability of a photon traversing between the camera center and the i-th point given the length of the i-th interval $\delta_{i}$ ,
76
+
77
+ $$
78
+ \tau_{i} = \prod_{j=1}^{i} \exp\left(-\sigma_{j} \delta_{j}\right) \left(1 - \exp\left(-\sigma_{i} \delta_{i}\right)\right).
79
+ $$
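The weighted sums of Equation 1 can be implemented in a few lines; the sketch below assumes per-ray sample tensors and uses the usual exclusive cumulative product for the transmittance term.

```python
import torch

def render_rays(sigma, color, label, feat, deltas):
    """Volume rendering of Equation 1 for R rays with N samples each.
    sigma: (R, N), color: (R, N, 3), label: (R, N, C), feat: (R, N, F), deltas: (R, N).
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)                       # per-interval opacity
    trans = torch.cumprod(torch.exp(-sigma * deltas), dim=-1)      # accumulated transmittance
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    w = (trans * alpha).unsqueeze(-1)                              # rendering weights tau_i
    I_c = (w * color).sum(dim=1)                                   # (R, 3) color pixels
    I_s = (w * label).sum(dim=1)                                   # (R, C) label pixels
    I_phi = (w * feat).sum(dim=1)                                  # (R, F) feature pixels
    return I_c, I_s, I_phi
```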
80
+
81
+ Similar to prior works [8, 21, 52], we approximate high-resolution volume rendering (Equation 1) with a 2D upsampler $U$ to reduce the computational cost. We render high-res $512 \times 512$ images in two passes. In the first pass, we render low-res $64 \times 64$ images $\hat{\mathbf{I}}_{\mathbf{c}}, \hat{\mathbf{I}}_{\mathbf{s}}, \hat{\mathbf{I}}_{\phi}$. Then a CNN upsampler $U$ is applied to obtain the high-res images,
82
+
83
+ $$
84
+ \hat{\mathbf{I}}_{\mathbf{c}}^{+} = U(\hat{\mathbf{I}}_{\mathbf{c}}, \hat{\mathbf{I}}_{\phi}), \qquad \hat{\mathbf{I}}_{\mathbf{s}}^{+} = U(\hat{\mathbf{I}}_{\mathbf{s}}, \hat{\mathbf{I}}_{\phi}).
85
+ $$
86
+
87
+ # 3.2. Learning Objective
88
+
89
+ Learning conditional 3D representations from monocular images is challenging due to its under-constrained nature. Given training data of associated images, label maps, and camera poses predicted by an off-the-shelf model, we carefully construct learning objectives, including reconstruction, adversarial, and cross-view consistency losses. These objectives will be described below.
90
+
91
+ Reconstruction Loss. Given a ground-truth viewpoint $\mathbf{P}$ associated with the color and label maps $\{\mathbf{I}_{\mathbf{c}}, \mathbf{I}_{\mathbf{s}}\}$ , we render color and label maps from $\mathbf{P}$ and compute reconstruction losses for both high-res and low-res output. We use LPIPS [81] to compute the image reconstruction loss $\mathcal{L}_c$ for color images. For label reconstruction loss $\mathcal{L}_s$ , we use the balanced cross-entropy loss for segmentation maps or L2 Loss for edge maps,
92
+
93
+ $$
94
+ \mathcal{L}_{\text{recon}} = \lambda_{c} \mathcal{L}_{c}\left(\mathbf{I}_{\mathbf{c}}, \left\{\hat{\mathbf{I}}_{\mathbf{c}}, \hat{\mathbf{I}}_{\mathbf{c}}^{+}\right\}\right) + \lambda_{s} \mathcal{L}_{s}\left(\mathbf{I}_{\mathbf{s}}, \left\{\hat{\mathbf{I}}_{\mathbf{s}}, \hat{\mathbf{I}}_{\mathbf{s}}^{+}\right\}\right),
95
+ $$
96
+
97
+ where $\lambda_{c}$ and $\lambda_{s}$ balance the two terms.
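A sketch of this loss, written with the widely used `lpips` package, is shown below; the choice of LPIPS backbone and the omission of balanced class weights and of the separate low-res/high-res terms are simplifications, not the exact training configuration.

```python
import torch.nn.functional as F
import lpips  # pip install lpips

lpips_fn = lpips.LPIPS(net='vgg')  # perceptual image loss; backbone choice is an assumption

def reconstruction_loss(I_c, I_c_hat, I_s, I_s_hat, lambda_c=1.0, lambda_s=1.0, label_type='seg'):
    """L_recon = lambda_c * LPIPS(I_c, I_c_hat) + lambda_s * L_s(I_s, I_s_hat).
    I_c, I_c_hat: RGB images in [-1, 1], shape (B, 3, H, W).
    I_s: (B, H, W) class indices for 'seg', or (B, 1, H, W) edge map for 'edge'.
    I_s_hat: (B, C, H, W) label logits for 'seg', or (B, 1, H, W) for 'edge'.
    """
    loss_c = lpips_fn(I_c_hat, I_c).mean()
    if label_type == 'seg':
        loss_s = F.cross_entropy(I_s_hat, I_s)   # class-balancing weights omitted
    else:
        loss_s = F.mse_loss(I_s_hat, I_s)        # L2 loss for edge maps
    return lambda_c * loss_c + lambda_s * loss_s
```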
98
+
99
+ Pixel-aligned Conditional Discriminator. The reconstruction loss alone fails to synthesize detailed results from novel viewpoints. Therefore, we use an adversarial loss [20] to enforce renderings to look realistic from random viewpoints. Specifically, we have two discriminators $D_{\mathbf{c}}$ and $D_{\mathbf{s}}$ for RGB images and label maps, respectively. $D_{\mathbf{c}}$ is a widely-used
100
+
101
+ ![](images/988411ec3b4d606f33e07441af6e5a4bd16111801d61dbb690142bd8ed4bbd8a.jpg)
102
+ Multi-view Generation of Seg Maps
103
+ Figure 3. Cross-View Consistency Loss. Given an input label map $\mathbf{I}_{\mathbf{s}}$ and its associated pose $\mathbf{P}$ , we first infer the geometry latent code $\mathbf{w}_{\mathbf{g}}$ . From $\mathbf{w}_{\mathbf{g}}$ , we can generate a label map $\hat{\mathbf{I}}_{\mathbf{s}}$ from the same pose $\mathbf{P}$ , and $\hat{\mathbf{I}}_{\mathbf{s}}'$ from a random pose $\mathbf{P}'$ . Next, we infer $\mathbf{w}_{\mathbf{g}}'$ from the novel view $\hat{\mathbf{I}}_{\mathbf{s}}'$ , and render it back to the original pose $\mathbf{P}$ to obtain $\hat{\mathbf{I}}_{\mathbf{s}}''$ . Finally, we add a reconstruction loss: $\mathcal{L}_{\mathrm{CVC}} = \lambda_{\mathrm{CVC}}\mathcal{L}_s(\hat{\mathbf{I}}_{\mathbf{s}}'', \hat{\mathbf{I}}_{\mathbf{s}})$ .
104
+
105
+ GAN loss that takes real and fake images as input, while the pixel-aligned conditional discriminator $D_{\mathbf{s}}$ concatenates color images and label maps as input, which encourages pixel alignment between color images and label maps. Notably, in $D_{\mathbf{s}}$ , we stop the gradients for the color images to prevent a potential quality degradation. We also feed the rendered low-res images to prevent the upsampler from hallucinating details that are inconsistent with the low-res output. The adversarial loss can be written as follows:
106
+
107
+ $$
108
+ \mathcal{L}_{\mathrm{GAN}} = \lambda_{D_{\mathbf{c}}} \mathcal{L}_{D_{\mathbf{c}}}(\hat{\mathbf{I}}_{\mathbf{c}}^{+}, \hat{\mathbf{I}}_{\mathbf{c}}) + \lambda_{D_{\mathbf{s}}} \mathcal{L}_{D_{\mathbf{s}}}(\hat{\mathbf{I}}_{\mathbf{c}}^{+}, \hat{\mathbf{I}}_{\mathbf{c}}, \hat{\mathbf{I}}_{\mathbf{s}}^{+}, \hat{\mathbf{I}}_{\mathbf{s}}),
109
+ $$
110
+
111
+ where $\lambda_{D_{\mathrm{c}}}$ and $\lambda_{D_{\mathrm{s}}}$ balance the two terms. To stabilize the GAN training, we adopt the R1 regularization loss [45].
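The sketch below illustrates how the input to the pixel-aligned conditional discriminator $D_{\mathbf{s}}$ could be assembled: the rendered color image (with gradients stopped) is concatenated channel-wise with the rendered label map, and the low-res renderings are appended after upsampling. The exact way the low-res outputs are injected is an assumption; only the stop-gradient and the image/label concatenation follow the description above.

```python
import torch
import torch.nn.functional as F

def d_s_input(img_hi, seg_hi, img_lo, seg_lo):
    """Build the input tensor of the conditional discriminator D_s.
    img_*: RGB renderings, seg_*: rendered label maps; *_hi are 512x512, *_lo are 64x64.
    """
    size = img_hi.shape[-2:]
    img_lo_up = F.interpolate(img_lo, size=size, mode='bilinear', align_corners=False)
    seg_lo_up = F.interpolate(seg_lo, size=size, mode='bilinear', align_corners=False)
    # Stop gradients through the color images so D_s does not degrade image quality;
    # concatenating images and labels lets D_s judge their pixel alignment.
    return torch.cat([img_hi.detach(), img_lo_up.detach(), seg_hi, seg_lo_up], dim=1)
```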
112
+
113
+ Cross-view Consistency Loss. We observe that inputting label maps of the same object from different viewpoints will sometimes result in different 3D shapes. Therefore we add a cross-view consistency loss to regularize the training, as illustrated in Figure 3. Given an input label map $\mathbf{I}_{\mathbf{s}}$ and its associated pose $\mathbf{P}$ , we generate the label map $\hat{\mathbf{I}}_{\mathbf{s}}^{\prime}$ from a different viewpoint $\mathbf{P}^{\prime}$ , and render the label map $\hat{\mathbf{I}}_{\mathbf{s}}^{\prime \prime}$ back to the pose $\mathbf{P}$ using $\hat{\mathbf{I}}_{\mathbf{s}}^{\prime}$ as input. We add a reconstruction loss between $\hat{\mathbf{I}}_{\mathbf{s}}^{\prime \prime}$ and $\hat{\mathbf{I}}_{\mathbf{s}}$ :
114
+
115
+ $$
116
+ \mathcal{L}_{\mathrm{CVC}} = \lambda_{\mathrm{CVC}} \mathcal{L}_{s}(\hat{\mathbf{I}}_{\mathbf{s}}^{\prime\prime}, \hat{\mathbf{I}}_{\mathbf{s}}),
117
+ $$
118
+
119
+ where $\mathcal{L}_s$ denotes the reconstruction loss in the label space, and $\lambda_{\mathrm{CVC}}$ weights the loss term. This loss is crucial for reducing error accumulation during cross-view editing.
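Written against hypothetical helpers `E` (the conditional encoder) and `G` (rendering a label map from a latent code and a pose), the loss of Figure 3 could look as follows; how gradients are propagated through the two encoding passes is left unspecified here.

```python
def cross_view_consistency_loss(E, G, I_s, P, P_random, z, label_loss, lambda_cvc=1.0):
    """Cross-view consistency loss (Figure 3).
    E(label_map, z) -> latent code, G(latent, pose) -> rendered label map,
    label_loss -> reconstruction loss in label space (cross-entropy or L2).
    """
    w = E(I_s, z)                      # latent code inferred from the input view
    I_s_hat = G(w, P)                  # label map re-rendered at the input pose P
    I_s_novel = G(w, P_random)         # label map rendered at a random pose P'
    w_novel = E(I_s_novel, z)          # re-encode the novel-view label map
    I_s_cycle = G(w_novel, P)          # render it back to the original pose
    return lambda_cvc * label_loss(I_s_cycle, I_s_hat)
```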
120
+
121
+ Optimization. Our final learning objective is written as follows:
122
+
123
+ $$
124
+ \mathcal{L}_{\text{total}} = \mathcal{L}_{\text{recon}} + \mathcal{L}_{\mathrm{GAN}} + \mathcal{L}_{\mathrm{CVC}}.
125
+ $$
126
+
127
+ At every iteration, we sample a random pose with probability $p$ and otherwise use the ground-truth pose. We use the reconstruction loss and GAN loss for ground-truth poses, while for random poses, we only use the GAN loss. We provide the hyper-parameters and more implementation details in the appendix of our arXiv version.
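A sketch of this schedule is given below; `render`, `gan_loss`, `recon_loss`, and `sample_random_pose` are hypothetical stand-ins for the components described in this section.

```python
import random

def training_step(batch, render, gan_loss, recon_loss, sample_random_pose, p=0.5):
    """With probability p, render from a random pose and apply only the GAN loss;
    otherwise render from the ground-truth pose and also apply the reconstruction loss."""
    use_random_pose = random.random() < p
    pose = sample_random_pose() if use_random_pose else batch['pose']
    outputs = render(batch['label_map'], batch['z'], pose)
    loss = gan_loss(outputs)
    if not use_random_pose:
        loss = loss + recon_loss(outputs, batch)
    return loss
```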
128
+
129
+ ![](images/6524c121ce230d05cd41feb9f75bed487b059bf5a675aa6a54d2e8ffe2d5a69d.jpg)
130
+ Input Seg Map
131
+
132
+ ![](images/962e55fa0d7fee6b63d4c7ba2d40169c247bab59d0ed8d5aa78cd86a0ce374a6.jpg)
133
+ Ours
134
+
135
+ ![](images/492c45d44d7ba6d8169e4d0c645e6a8da2d7374cd6ea95cdbfd93ce49e14e693.jpg)
136
+ Pix2NeRF
137
+ Figure 4. Qualitative Comparison with Pix2NeRF [6], SoFGAN [11], and SEAN [87] on CelebAMask dataset for seg2face task. SEAN fails in multi-view synthesis, while SoFGAN suffers from multi-view inconsistency (e.g., face identity changes across viewpoints). Our method renders high-quality images while maintaining multi-view consistency. Please check our website for more examples.
138
+
139
+ ![](images/25d1c57f903489de4ac9d4c8d90385d68150f583be4b4f372a55bccb3970eaff.jpg)
140
+ SoFGAN
141
+
142
+ ![](images/446644698823e9c07c72d457e262b4557f2b556ad8bda9432911171a4ac03a66.jpg)
143
+ SEAN
144
+
145
+ ![](images/73d0e8dcc36b189f63f1ea7b19c085511cee534a07c251b442368c72542146b3.jpg)
146
+ Input Seg Map
147
+
148
+ ![](images/fa82f049d6800f9a4b8d6e0160c5b23261b5fbb18c367726987cf682cf2ab95d.jpg)
149
+ Ours
150
+
151
+ ![](images/2667b3ac10d618c8c04aee838c4b19b38a78be508b30b10c8f6b052b31387a13.jpg)
152
+ w/o 3D Labels
153
+
154
+ ![](images/1286d0a864f24b9a995cdaf2bfb3b7b1c4ed3425e1e932af73f41d5580cda404.jpg)
155
+
156
+ ![](images/293baefdf04c37c560ceb68f7e44c4547c13fcb023a6b1f32d5d0724828710b5.jpg)
157
+
158
+ ![](images/0754bdc1996513a0f381d6645fa35e4f747966a743de55433aa9c6cf0cd03390.jpg)
159
+
160
+ ![](images/df3d8e6c4c75d7110ef47a2f48f699d9fcf88ae9f5a6cf9ffbdfcdcc3d299080.jpg)
161
+ Figure 5. Qualitative ablation on seg2face and seg2cat. We ablate our method by removing the branch that renders label maps (w/o 3D Labels). Our results better align with input labels (e.g., hairlines and the cat's ear).
162
+
163
+ ![](images/763d78b11909cd56b602c8cb563bda80d1ea09fa760c6ba2976ce6dc62826ded.jpg)
164
+
165
+ ![](images/e03c8222f29076d852d5ad90fbeeb9406e2a18927e788482a6e3617d19ad05bf.jpg)
166
+
167
+ ![](images/689b58efb2e7efef39d2c744f5934ead645dc516fc6f60e473d266da6db4b071.jpg)
168
+ Input Edge Map
169
+
170
+ ![](images/3d3c8a105e49a2338d37b272fe6e18044354958d74ec02ee690fe6d523e90e42.jpg)
171
+ Rendered RGB images & edge maps
172
+ GT View
173
+ Figure 6. Results on edge2cat. Our model is trained on AFHQ-cat [16] with edges extracted by pidinet [66].
174
+
175
+ # 4. Experiment
176
+
177
+ We first introduce the datasets and evaluation metrics. Then we compare our method with the baselines. Finally, we demonstrate cross-view editing and multi-modal synthesis applications enabled by our method.
178
+
179
+ Datasets. We consider four tasks: seg2face, seg2cat,
180
+
181
+ ![](images/611b3a3f67c0d28c059363a578b0e5558d3d974a5ff4a53806747d05ed6cb5fa.jpg)
182
+ Input Seg Map
183
+
184
+ ![](images/da8d379482d15b6744c8d2fee408dc38176a00236fdc0e02e612b6f55aa221d2.jpg)
185
+ Ours
186
+
187
+ ![](images/c3837de7e313c28c101c3a24263b10dbcd65e874fee7bd8a9466700be6d15ec3.jpg)
188
+ w/o 3D Labels
189
+
190
+ ![](images/075e8f080fffcde2a09c9e911fb4c114a3fb1e5e0c56c746265e0731c1264b44.jpg)
191
+
192
+ ![](images/654619b953a1a632b636e8147a0aaf0328730b9e93a0ac2f876bb36d04e60bdc.jpg)
193
+
194
+ ![](images/7ad193f647b58e706d59eba116ba5f003f6a531c1c23e3860a17c6c55c3b1890.jpg)
195
+
196
+ ![](images/2dc61c293039e792708ef518ce6744dd1404d7395c9f1e736c40b54c5d212cb9.jpg)
197
+
198
+ ![](images/a81b6d3de8edacada11ba680be9da78a60da8c73063b4a6670252dddd8b0b201.jpg)
199
+
200
+ ![](images/332e8ae0fe3aaa5a4391ce7bcee64fcc21e0f83f2fbda3ae1f8e1a3214739393.jpg)
201
+
202
+ ![](images/31b1df148d6c6de23c999ffade5f1dbd5fb4fc80aaa0116bd7e8a51219eec901.jpg)
203
+ Input Edge Map
204
+
205
+ ![](images/0e1af3b81c89ef415836333616082c06bb5a4f1e64c77537ccebfc922c49bfb7.jpg)
206
+ Figure 7. Qualitative comparisons on edge2car. pix2pix3D (Ours) and Pix2NeRF [6] are trained on shapenet-car [10], and pix2pix3D achieves better quality and alignment than Pix2NeRF.
207
+
208
+ edge2cat, and edge2car in our experiments. For seg2face, we use CelebAMask-HQ [38] for evaluation. CelebAMask-HQ contains 30,000 high-resolution face images from CelebA [41], and each image has a facial part segmentation mask and a predicted pose. The segmentation masks contain 19 classes, including skin, eyebrows, ears, mouth, lip, etc. The pose associated with each image segmentation is predicted by HopeNet [60]. We split the CelebAMask-HQ dataset into
209
+
210
+ <table><tr><td rowspan="2">Seg2Face<br>CelebAMask [38]</td><td colspan="3">QUALITY</td><td colspan="2">ALIGNMENT</td><td rowspan="2">FVV Identity ↓</td></tr><tr><td>FID ↓</td><td>KID ↓</td><td>SG Diversity ↑</td><td>mIoU ↑</td><td>acc ↑</td></tr><tr><td>SEAN [87]</td><td>32.74</td><td>0.018</td><td>0.29</td><td>0.52</td><td>0.85</td><td>N/A</td></tr><tr><td>SoFGAN [11]</td><td>23.34</td><td>0.012</td><td>0.33</td><td>0.53</td><td>0.89</td><td>0.58</td></tr><tr><td>Pix2NeRF [6]</td><td>54.23</td><td>0.042</td><td>0.16</td><td>0.36</td><td>0.65</td><td>0.44</td></tr><tr><td colspan="7">pix2pix3D (Ours)</td></tr><tr><td>w/o 3D Labels</td><td>12.96</td><td>0.005</td><td>0.30</td><td>N/A (0.43)</td><td>N/A (0.81)</td><td>0.38</td></tr><tr><td>w/o CVC</td><td>11.62</td><td>0.004</td><td>0.30</td><td>0.50 (0.50)</td><td>0.87 (0.85)</td><td>0.42</td></tr><tr><td>Full Model</td><td>11.54</td><td>0.003</td><td>0.28</td><td>0.51 (0.52)</td><td>0.90 (0.88)</td><td>0.36</td></tr><tr><td>Full Model†</td><td>11.13</td><td>0.003</td><td>0.29</td><td>0.51 (0.50)</td><td>0.90 (0.87)</td><td>0.36</td></tr></table>
211
+
212
+ Table 1. Seg2face Evaluation. Our metrics include image quality (FID, KID, SG Diversity), alignment (mIoU and acc against GT label maps), and multi-view consistency (FVV Identity). Single-generation diversity (SG Diversity) is obtained by computing the LPIPS metric between randomly generated pairs given a single conditional input. To evaluate alignment, we compare the generated label maps against the ground truth in terms of mIoU and pixel accuracy (acc). Alternatively, given a generated image, one could estimate label maps via a face parser, and compare those against the ground truth (numbers in parentheses). We include SEAN [87] and SoFGAN [11] as baselines, and modify Pix2NeRF [6] to take conditional input. Our method achieves the best quality, alignment ACC, and FVV Identity while being competitive on SG Diversity. SoFGAN tends to have better alignment but worse 3D consistency. We also ablate our method w.r.t the 3D labels and the cross-view consistency (CVC) loss. Our 3D labels are crucial for alignment, while the CVC loss improves multi-view consistency. Using pretrained models from EG3D $(\dagger)$ also improves the performance.
213
+
214
+ <table><tr><td rowspan="2">Edge2Car</td><td colspan="3">QUALITY</td><td>ALIGNMENT</td></tr><tr><td>FID ↓</td><td>KID ↓</td><td>SG Diversity ↑</td><td>AP ↑</td></tr><tr><td>PIX2NERF [6]</td><td>23.42</td><td>0.014</td><td>0.06</td><td>0.28</td></tr><tr><td colspan="5">PIX2PIX3D (OURS)</td></tr><tr><td>w/o 3D LABELS</td><td>10.73</td><td>0.005</td><td>0.12</td><td>0.45 (0.42)</td></tr><tr><td>w/o CVC</td><td>9.42</td><td>0.004</td><td>0.13</td><td>0.61 (0.59)</td></tr><tr><td>FULL MODEL</td><td>8.31</td><td>0.004</td><td>0.13</td><td>0.63 (0.59)</td></tr></table>
215
+
216
+ a training set of 24,183, a validation set of 2,993, and a test set of 2,824, following the original work [38]. For seg2cat and edge2cat, we use AFHQ-cat [16], which contains 5,065 images at $512 \times 512$ resolution. We estimate the viewpoints using unsup3d [74]. We extract the edges using pidinet [66] and obtain segmentation by clustering DINO features [2] into 6 classes. For edge2car, we use 3D models from shapenet-
217
+
218
+ Table 2. Edge2car Evaluation. We compare our method with Pix2NeRF [6] on edge2car using the shapenet-car [10] dataset. Similar to Table 1, we evaluate FID, KID, and SG Diversity for image quality. We also evaluate the alignment with the input edge map using AP. Similarly, we can either run informative drawing [7] on generated images to obtain edge maps (numbers in parentheses) or directly use generated edge maps to calculate the metrics. We achieve better image quality and alignment than Pix2NeRF. We also find that using 3D labels and cross-view consistency loss is helpful regarding FID and AP metrics.
219
+
220
+ <table><tr><td rowspan="2">Seg2Cat<br>AFHQ-cat [16]</td><td colspan="3">QUALITY</td><td colspan="2">ALIGNMENT</td></tr><tr><td>FID ↓</td><td>KID ↓</td><td>SG Diversity ↑</td><td>mIoU ↑</td><td>acc ↑</td></tr><tr><td>Pix2NeRF [6]</td><td>43.92</td><td>0.081</td><td>0.15</td><td>0.27</td><td>0.58</td></tr><tr><td colspan="6">pix2pix3D (Ours)</td></tr><tr><td>w/o 3D Labels</td><td>10.41</td><td>0.004</td><td>0.26</td><td>N/A (0.49)</td><td>N/A (0.69)</td></tr><tr><td>w/o CVC</td><td>9.64</td><td>0.004</td><td>0.26</td><td>0.66 (0.63)</td><td>0.76 (0.73)</td></tr><tr><td>Full Model</td><td>8.62</td><td>0.003</td><td>0.27</td><td>0.66 (0.62)</td><td>0.78 (0.73)</td></tr></table>
221
+
222
+ Table 3. Seg2cat Evaluation. We compare our method with Pix2NeRF [6] on Seg2Cat using AFHQ-cat dataset [16], with segmentation obtained by clustering DINO features [2]. Similar to Table 1, we evaluate the image quality and alignment. Ours performs better in all metrics.
223
+
224
+ ![](images/5b3d08623c8b1ae15837e2929eac3d8a625b8e094d1d60e27bc8bf860a30031c.jpg)
225
+ Figure 8. Semantic Mesh. We show semantic meshes of human and cat faces from marching cubes colored by 3D labels.
226
+
227
+ car [10] and render 500,000 images at $128 \times 128$ resolution for training, and 30,000 for evaluation. We extract the edges using informative drawing [7]. We train our model at $512 \times 512$ resolution except for $128 \times 128$ in the edge2car task.
228
+
229
+ Running Time. Training the model at $512 \times 512$ resolution takes about three days on eight RTX 3090 GPUs, but we can significantly reduce the training time to 4 hours by initializing parts of our model with pretrained weights from EG3D [8]. During inference, our model takes $10\mathrm{ms}$ to obtain the style vector, and another $30\mathrm{ms}$ to render the final image and the label map on a single RTX A5000. The low latency (25 FPS) allows for interactive user editing.
230
+
231
+ # 4.1. Evaluation metrics
232
+
233
+ We evaluate the models from two aspects: 1) the image quality regarding fidelity and diversity, and 2) the alignment between input label maps and generated outputs.
234
+
235
+ Quality Metrics. Following prior works [21, 57], we use the clean-fid library [58] to compute Fréchet Inception Distance (FID) [23] and Kernel Inception Distance (KID) [4] to measure the distribution distance between synthesized results and real images. We also evaluate the single-generation diversity (SG Diversity) by calculating the LPIPS metric between randomly generated pairs given a single input following prior works [11, 85]. For FID and KID, we generate
236
+
237
+ ![](images/ddcc366d3be846fb4986f5b23ba0240c5f84a3fb77a866926f6ae612b9a99c3e.jpg)
238
+ Figure 9. We study the effect of random pose sampling probability $p$ during training. Without random poses ( $p = 0$ ), the model achieves the best alignment with input semantic maps, with reduced image quality. In contrast, only using random poses ( $p = 1$ ) achieves the best image quality, while results fail to align with input maps. We find $p = 0.5$ balances the image quality and input alignment.
239
+
240
+ ![](images/86a570277fc93bf09c5b134d5bfdbc4c976c22b004fdaeb0c62af6db371f5793.jpg)
241
+
242
+ ![](images/be042cfffb6bb81a1708257c1eb218309bcad47386f9707ff233a8fd004df1b2.jpg)
243
+
244
+ ![](images/24569639882bfc6cc5f28e3701610500a4d7632303263fe8a02bc4e6d00ac557.jpg)
245
+
246
+ ![](images/8d4b5c7c30f547f27dc1f0dffe40615b6cedbbd5014ad3204a95d10f266d6b63.jpg)
247
+ Figure 10. Cross-view Editing of Edge2Car. Our 3D editing system allows users to edit label maps from any viewpoint instead of only the input view. Importantly, our feed-forward encoder allows fast inference of the latent code without GAN-inversion. Typically, a single forward pass of rendering takes only $40\mathrm{ms}$ on a single RTX A5000, which enables interactive editing. Please check our demo video on our website.
248
+
249
+ 10 images per label map in the test set using randomly sampled $z$ . We compare our generated images with the whole dataset, including training and test images.
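A sketch of how these quality metrics can be computed with the `clean-fid` and `lpips` packages follows; the folder names and the all-pairs averaging for SG Diversity are assumptions for illustration.

```python
import lpips                      # pip install lpips
from cleanfid import fid          # pip install clean-fid

# FID / KID between folders of generated and real images (clean-fid library).
fid_score = fid.compute_fid('generated_images/', 'real_images/')
kid_score = fid.compute_kid('generated_images/', 'real_images/')

# SG Diversity: mean LPIPS distance over pairs of images generated from the same
# label map with different randomly sampled z.
lpips_fn = lpips.LPIPS(net='alex')

def sg_diversity(images):  # images: (K, 3, H, W) tensor in [-1, 1], one label map
    dists = [lpips_fn(images[i:i + 1], images[j:j + 1]).item()
             for i in range(len(images)) for j in range(i + 1, len(images))]
    return sum(dists) / len(dists)
```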
250
+
251
+ Alignment Metrics. We evaluate models on the test set using mean Intersection-over-Union (mIoU) and pixel accuracy (acc) for segmentation maps, following existing works [57, 63], and average precision (AP) for edge maps. For those models that render label maps as output, we directly compare them with the ground-truth labels. Otherwise, we first predict the label maps from the output RGB images using off-the-shelf networks [38, 66], and then compare the prediction with the ground truth. The metrics computed on such predicted label maps are reported in parentheses in Table 1 and Table 2.
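For completeness, a small sketch of the segmentation alignment metrics (mIoU and pixel accuracy) on integer class maps:

```python
import numpy as np

def miou_and_acc(pred, gt, num_classes):
    """Mean IoU and pixel accuracy between predicted and ground-truth label maps."""
    pred, gt = pred.reshape(-1), gt.reshape(-1)
    acc = float((pred == gt).mean())
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                       # ignore classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious)), acc
```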
252
+
253
+ For seg2face, we evaluate the preservation of facial identity across different viewpoints (FVV Identity) by calculating identity distances with the dlib face recognition algorithm.
254
+
255
+ # 4.2. Baseline comparison
256
+
257
+ Baselines. Since there are no prior works on conditional 3D-aware image synthesis, we make minimal modifications to Pix2NeRF [6] so that it is conditioned on label maps instead of images. For a thorough comparison, we also introduce two 2D baselines: SEAN [87] and SoFGAN [11]. SEAN [87] cannot generate multi-view images by design (N/A for FVV Identity), while SoFGAN [11] uses an unconditional 3D semantic map generator before its 2D
258
+
259
+ generator, so we can evaluate FVV Identity for it.
260
+
261
+ Results. Figure 4 shows the qualitative comparison for seg2face, and Table 1 reports the evaluation results. SoFGAN [11] tends to produce results with slightly better alignment but worse 3D consistency due to its 2D RGB generator. Our method achieves the best quality, alignment acc, and FVV Identity while being competitive with 2D baselines on SG diversity. Figure 5 shows the qualitative ablation on seg2face and seg2cat. Table 3 reports the metrics for seg2cat. Figure 6 shows the example results for edge2cat. Figure 7 shows the qualitative comparison for edge2car and Table 2 reports the metrics. Our method achieves the best image quality and alignment. Figure 8 shows semantic meshes of human and cat faces, extracted by marching cubes and colored by our learned 3D labels. We provide more evaluation results in the appendix of our arXiv version.
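The semantic meshes in Figure 8 are obtained by running marching cubes on the learned density and coloring the vertices with the 3D labels. A minimal sketch is shown below; the grid resolution, the density threshold, and the `query_density` / `query_label` wrappers around the tri-plane decoder are illustrative assumptions.

```python
import numpy as np
from skimage import measure  # pip install scikit-image

def extract_semantic_mesh(query_density, query_label, res=256, level=10.0):
    """Extract a label-colored mesh from the learned 3D representation."""
    grid = np.linspace(-1.0, 1.0, res)
    pts = np.stack(np.meshgrid(grid, grid, grid, indexing='ij'), axis=-1).reshape(-1, 3)
    sigma = query_density(pts).reshape(res, res, res)          # density on a dense grid
    verts, faces, _, _ = measure.marching_cubes(sigma, level=level)
    verts_world = verts / (res - 1) * 2.0 - 1.0                # grid indices -> [-1, 1]^3
    vert_labels = query_label(verts_world)                     # color vertices by 3D labels
    return verts_world, faces, vert_labels
```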
262
+
263
+ Ablation Study. We compare our full method to several variants. Specifically, (1) w/o 3D LABELS, we remove the branch of rendering label maps from our method, and (2) w/o CVC, we remove the cross-view consistency loss. From Table 1, Table 2, and Figure 5, rendering label maps is crucial for the alignment with the input. We posit that the joint learning of appearance, geometry, and label information poses strong constraints on correspondence between the input label maps and the 3D representation. Thus our method can synthesize images pixel-aligned with the inputs. Our CVC loss helps preserve the facial identity from different viewpoints.
264
+
265
+ ![](images/701762258dbb14addbc873741842e7690298e349830108143d07aff390405b8c.jpg)
266
+ Figure 11. Multi-modal Synthesis. The leftmost column is the input segmentation map. We use the same segmentation map for each row. We generate multi-modal results by randomly sampling an appearance style for each column.
267
+
268
+ Analysis on random sampling of poses. We study the effect of different probabilities of sampling random poses during training, as shown in Figure 9. When sampling no random poses $(p = 0)$ , the model aligns best with the input label maps but yields suboptimal image quality. Conversely, sampling only random poses $(p = 1)$ gives the best image quality but suffers from severe misalignment with the input label maps. We find that $p = 0.5$ achieves a good balance between image quality and alignment with the input.
269
+
270
+ # 4.3. Applications
271
+
272
+ Cross-view Editing. As shown in Figure 10, our 3D editing system allows users to generate and edit label maps from any viewpoint instead of only the input view. The edited label map is further fed into the conditional encoder to update the 3D representation. Unlike GAN inversion [83], our feedforward conditional encoder allows fast inference of the latent code. Thus, a single forward pass of our full model takes only $40\mathrm{ms}$ on a single RTX A5000.
273
+
274
+ Multi-modal synthesis and interpolation. Like other style-based generative models [8, 21, 34, 36], our method can disentangle the geometry and appearance information. Specifically, the input label map captures the geometry information while the randomly sampled latent code controls the appearance. We show style manipulation results in Figure 11. We can also interpolate both the geometry styles and the appearance styles (Figure 12). These results show the clear disentanglement of our 3D representation.
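Because the first 7 style vectors come from the label map and the remaining ones from $\mathbf{z}$, geometry and appearance can be interpolated independently; a sketch of this is shown below (linear interpolation in $\mathbf{w}^{+}$ space is an assumption).

```python
import torch

def interpolate_w_plus(w_a, w_b, t_geo, t_app, n_geo=7):
    """Interpolate two w+ codes of shape (B, 13, 256): the first n_geo vectors
    (geometry, driven by the label map) and the rest (appearance, driven by z)
    are blended with separate weights t_geo and t_app in [0, 1]."""
    w = w_a.clone()
    w[:, :n_geo] = (1 - t_geo) * w_a[:, :n_geo] + t_geo * w_b[:, :n_geo]
    w[:, n_geo:] = (1 - t_app) * w_a[:, n_geo:] + t_app * w_b[:, n_geo:]
    return w
```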
275
+
276
+ ![](images/8ab2ba4ff5cd2044bcafae38e595a31eb99f389503ece2b7b84028a1763f0c5c.jpg)
277
+ Figure 12. Interpolation. In each $5 \times 5$ grid, the images at the top left and bottom right are generated from the input maps next to them. Each row interpolates two images in label space, while each column interpolates the appearance. For camera poses, we interpolate the pitch along the row and the yaw along the column.
278
+
279
+ # 5. Discussion
280
+
281
+ We have introduced pix2pix3D, a 3D-aware conditional generative model for controllable image synthesis. Given a 2D label map, our model allows users to render images from any viewpoint. Our model augments the neural field with 3D labels, assigning label, color, and density to every 3D point, allowing for the simultaneous rendering of the image and a pixel-aligned label map. The learned 3D labels further enable interactive 3D cross-view editing. We discuss the limitations and societal impact in our arXiv version.
282
+
283
+ Acknowledgments. We thank Sheng-Yu Wang, Nupur Kumari, Gaurav Parmar, Ruihan Gao, Muyang Li, George Cazenavette, Andrew Song, Zhipeng Bao, Tamaki Kojima, Krishna Wadhwani, Takuya Narihira, and Tatsuo Fujiwara for their discussions and help. We are grateful for the support from Sony Corporation, Singapore DSTA, and the CMU Argo AI Center for Autonomous Vehicle Research.
284
+
285
+ # References
286
+
287
+ [1] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan: How to embed images into the stylegan latent space? In IEEE International Conference on Computer Vision (ICCV), 2019. 2
288
+ [2] Shir Amir, Yossi Gandelsman, Shai Bagon, and Tali Dekel. Deep vit features as dense visual descriptors. ECCVW What is Motion For?, 2022. 6
289
+ [3] David Bau, Hendrik Strobelt, William Peebles, Jonas Wulff, Bolei Zhou, Jun-Yan Zhu, and Antonio Torralba. Semantic photo manipulation with a generative image prior. In ACM SIGGRAPH, 2019. 2
290
+ [4] Mikołaj Binkowski, Danica J Sutherland, Michael Arbel, and Arthur Gretton. Demystifying mmd gans. In International Conference on Learning Representations (ICLR), 2018. 6
291
+ [5] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations (ICLR), 2019. 2
292
+ [6] Shengqu Cai, Anton Obukhov, Dengxin Dai, and Luc Van Gool. Pix2nerf: Unsupervised conditional $\pi$ -gan for single image to neural radiance fields translation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 5, 6, 7
293
+ [7] Caroline Chan, Frédo Durand, and Phillip Isola. Learning to generate line drawings that convey geometry and semantics. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 6
294
+ [8] Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 3, 4, 6, 8
295
+ [9] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 2
296
+ [10] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical Report arXiv:1512.03012 [cs.GR], Stanford University — Princeton University — Toyota Technological Institute at Chicago, 2015. 2, 5, 6
297
+ [11] Anpei Chen, Ruiyang Liu, Ling Xie, Zhang Chen, Hao Su, and Jingyi Yu. Sofgan: A portrait image generator with dynamic styling. In ACM SIGGRAPH, 2021. 2, 5, 6, 7
298
+ [12] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In European Conference on Computer Vision (ECCV), 2022. 2
299
+ [13] Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, and Hao Su. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 2
300
+
301
+ [14] Tao Chen, Zhe Zhu, Ariel Shamir, Shi-Min Hu, and Daniel Cohen-Or. 3-sweep: Extracting editable objects from a single photo. ACM Transactions on Graphics (TOG), 32(6):1-10, 2013. 2
302
+ [15] Yuedong Chen, Qianyi Wu, Chuanxia Zheng, Tat-Jen Cham, and Jianfei Cai. Sem2nerf: Converting single-view semantic masks to neural radiance fields. In European Conference on Computer Vision (ECCV), 2022. 2
303
+ [16] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2, 5, 6
304
+ [17] Johanna Delanoy, Adrien Bousseau, Mathieu Aubry, Phillip Isola, and Alexei A. Efros. What you sketch is what you get: 3d sketching using multi-view deep volumetric prediction. In ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D), 2018. 2
305
+ [18] Kangle Deng, Andrew Liu, Jun-Yan Zhu, and Deva Ramanan. Depth-supervised NeRF: Fewer views and faster training for free. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2
306
+ [19] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 1
307
+ [20] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014. 1, 2, 4
308
+ [21] Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. Stylenerf: A style-based 3d aware generator for high-resolution image synthesis. In International Conference on Learning Representations (ICLR), 2022. 2, 3, 4, 6, 8
309
+ [22] Philipp Henzler, Niloy J Mitra, and Tobias Ritschel. Escaping plato's cave: 3d shape from adversarial rendering. In IEEE International Conference on Computer Vision (ICCV), 2019. 2
310
+ [23] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two timescale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems (NeurIPS), 2017. 6
311
+ [24] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems (NeurIPS), 2020. 1
312
+ [25] Hsin-Ping Huang, Hung-Yu Tseng, Hsin-Ying Lee, and Jia-Bin Huang. Semantic view synthesis. In European Conference on Computer Vision (ECCV), 2020. 2
313
+ [26] Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In European Conference on Computer Vision (ECCV), 2018. 2
314
+ [27] Zeng Huang, Tianye Li, Weikai Chen, Yajie Zhao, Jun Xing, Chloe Legendre, Linjie Luo, Chongyang Ma, and Hao Li. Deep volumetric video from very sparse multi-view performance capture. In European Conference on Computer Vision (ECCV), pages 351-369, 2018. 2
315
+
316
+ [28] Takeo Igarashi, Satoshi Matsuoka, and Hidehiko Tanaka. Teddy: a sketching interface for 3d freeform design. In ACM SIGGRAPH, 1999. 2
317
+ [29] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 1, 2
318
+ [30] Ajay Jain, Matthew Tancik, and Pieter Abbeel. Putting nerf on a diet: Semantically consistent few-shot view synthesis. In IEEE International Conference on Computer Vision (ICCV), 2021. 2
319
+ [31] Kaiwen Jiang, Shu-Yu Chen, Feng-Lin Liu, Hongbo Fu, and Lin Gao. Nerffaceediting: Disentangled face editing in neural radiance fields. In ACM SIGGRAPH Asia, 2022. 2
320
+ [32] James T Kajiya and Brian P Von Herzen. Ray tracing volume densities. ACM SIGGRAPH, 18(3):165-174, 1984. 3
321
+ [33] Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 2
322
+ [34] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 1, 2, 6, 8
323
+ [35] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2
324
+ [36] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 8
325
+ [37] Natasha Kholgade, Tomas Simon, Alexei Efros, and Yaser Sheikh. 3d object manipulation in a single photograph using stock 3d models. ACM Transactions on Graphics (TOG), 33(4):1-12, 2014.
326
+ [38] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. Maskgan: Towards diverse and interactive facial image manipulation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2, 5, 6, 7
327
+ [39] Chen-Hsuan Lin, Wei-Chiu Ma, Antonio Torralba, and Simon Lucey. Barf: Bundle-adjusting neural radiance fields. In IEEE International Conference on Computer Vision (ICCV), 2021. 2
328
+ [40] Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. Advances in neural information processing systems, 30, 2017. 2
329
+ [41] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015. 5
330
+ [42] Zhaoliang Lun, Matheus Gadelha, Evangelos Kalogerakis, Subhransu Maji, and Rui Wang. 3d shape reconstruction from sketches via multi-view convolutional networks. In 2017 International Conference on 3D Vision (3DV). IEEE, 2017. 2
331
+
332
+ [43] Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, and Daniel Duckworth. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 2
333
+ [44] Quan Meng, Anpei Chen, Haimin Luo, Minye Wu, Hao Su, Lan Xu, Xuming He, and Jingyi Yu. Gnerf: Gan-based neural radiance field without posed camera. In IEEE International Conference on Computer Vision (ICCV), 2021. 2
334
+ [45] Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for gans do actually converge? In International Conference on Machine Learning (ICML), 2018. 4
335
+ [46] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision (ECCV), 2020. 2, 3
336
+ [47] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. 2
337
+ [48] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. In ACM SIGGRAPH, 2022. 2
338
+ [49] Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. Hologan: Unsupervised learning of 3d representations from natural images. In IEEE International Conference on Computer Vision (ICCV), 2019. 2
339
+ [50] Michael Niemeyer, Jonathan T Barron, Ben Mildenhall, Mehdi SM Sajjadi, Andreas Geiger, and Noha Radwan. Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2
340
+ [51] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 2
341
+ [52] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. Stylesdf: High-resolution 3d-consistent image and geometry generation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 4
342
+ [53] Xingang Pan, Xudong Xu, Chen Change Loy, Christian Theobalt, and Bo Dai. A shading-guided generative implicit model for shape-accurate 3d-aware image synthesis. In Advances in Neural Information Processing Systems (NeurIPS), 2021. 2
343
+ [54] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2
344
+ [55] Taesung Park, Alexei A Efros, Richard Zhang, and Jun-Yan Zhu. Contrastive learning for unpaired image-to-image translation. In European Conference on Computer Vision (ECCV), 2020. 2
345
+
346
+ [56] Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. Semantic image synthesis with spatially-adaptive normalization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 1, 2
347
+ [57] Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. Semantic image synthesis with spatially-adaptive normalization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 6, 7
348
+ [58] Gaurav Parmar, Richard Zhang, and Jun-Yan Zhu. On aliased resizing and surprising subtleties in gan evaluation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 6
349
+ [59] Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski. Styleclip: Text-driven manipulation of stylegan imagery. In IEEE International Conference on Computer Vision (ICCV), 2021. 2
350
+ [60] Nataniel Ruiz, Eunji Chong, and James M. Rehg. Fine-grained head pose estimation without keypoints. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshop, 2018. 5
351
+ [61] Sara Fridovich-Keil and Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2
352
+ [62] Edgar Schonfeld, Vadim Sushko, Dan Zhang, Juergen Gall, Bernt Schiele, and Anna Khoreva. You only need adversarial supervision for semantic image synthesis. In International Conference on Learning Representations (ICLR), 2020. 2
353
+ [63] Edgar Schonfeld, Vadim Sushko, Dan Zhang, Juergen Gall, Bernt Schiele, and Anna Khoreva. You only need adversarial supervision for semantic image synthesis. In International Conference on Learning Representations (ICLR), 2021. 7
354
+ [64] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis. In Advances in Neural Information Processing Systems (NeurIPS), 2020. 2
355
+ [65] Yujun Shen and Bolei Zhou. Closed-form factorization of latent semantics in gans. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 2
356
+ [66] Zhuo Su, Wenzhe Liu, Zitong Yu, Dewen Hu, Qing Liao, Qi Tian, Matti Pietikainen, and Li Liu. Pixel difference networks for efficient edge detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 5, 6, 7
357
+ [67] Edgar Sucar, Shikun Liu, Joseph Ortiz, and Andrew Davison. iMAP: Implicit mapping and positioning in real-time. In Proceedings of the International Conference on Computer Vision (ICCV), 2021. 2
358
+ [68] Jingxiang Sun, Xuan Wang, Yichun Shi, Lizhen Wang, Jue Wang, and Yebin Liu. Ide-3d: Interactive disentangled editing for high-resolution 3d-aware portrait synthesis. In ACM Transactions on Graphics (TOG), 2022. 2
359
+ [69] Jingxiang Sun, Xuan Wang, Yong Zhang, Xiaoyu Li, Qi Zhang, Yebin Liu, and Jue Wang. Fenerf: Face editing in neural radiance fields. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2, 4
360
+ [70] Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P Srinivasan, Jonathan T Barron, and Ren
361
+
362
+ Ng. Learned initializations for optimizing coordinate-based neural representations. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 2
363
+ [71] Ayush Tewari, Mohamed Elgharib, Gaurav Bharaj, Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zollhofer, and Christian Theobalt. Stylerig: Rigging stylegan for 3d control over portrait images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2
364
+ [72] Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, and Daniel Cohen-Or. Designing an encoder for stylegan image manipulation. In ACM Transactions on Graphics (TOG), 2021. 2
365
+ [73] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
366
+ [74] Shangzhe Wu, Christian Rupprecht, and Andrea Vedaldi. Unsupervised learning of probably symmetric deformable 3d objects from images in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 6
367
+ [75] Xiaohua Xie, Kai Xu, Niloy J Mitra, Daniel Cohen-Or, Wenyong Gong, Qi Su, and Baoquan Chen. Sketch-to-design: Context-based part assembly. In Computer Graphics Forum, volume 32, pages 233–245. Wiley Online Library, 2013. 2
368
+ [76] Yinghao Xu, Sida Peng, Ceyuan Yang, Yujun Shen, and Bolei Zhou. 3d-aware image synthesis via learning structural and textural representations. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2
369
+ [77] Shunyu Yao, Tzu Ming Hsu, Jun-Yan Zhu, Jiajun Wu, Antonio Torralba, Bill Freeman, and Josh Tenenbaum. 3d-aware scene manipulation via inverse graphics. In Advances in Neural Information Processing Systems (NeurIPS), 2018. 2
370
+ [78] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. Pixelnerf: Neural radiance fields from one or few images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 2
371
+ [79] Jichao Zhang, Enver Sangineto, Hao Tang, Aliaksandr Siarohin, Zhun Zhong, Nicu Sebe, and Wei Wang. 3d-aware semantic-guided generative model for human synthesis. In European Conference on Computer Vision (ECCV), 2022. 2
372
+ [80] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020. 2
373
+ [81] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 4
374
+ [82] Jiapeng Zhu, Yujun Shen, Deli Zhao, and Bolei Zhou. Indomain gan inversion for real image editing. In European Conference on Computer Vision (ECCV), 2020. 2
375
+ [83] Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A Efros. Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision (ECCV), 2016. 2, 8
376
+ [84] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent
377
+
378
+ adversarial networks. In IEEE International Conference on Computer Vision (ICCV), 2017. 1, 2
379
+ [85] Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A Efros, Oliver Wang, and Eli Shechtman. Toward multimodal image-to-image translation. Advances in neural information processing systems, 30, 2017. 6
380
+ [86] Jun-Yan Zhu, Zhoutong Zhang, Chengkai Zhang, Jiajun Wu, Antonio Torralba, Josh Tenenbaum, and Bill Freeman. Visual object networks: Image generation with disentangled 3d representations. In Advances in Neural Information Processing Systems (NeurIPS), 2018. 2
381
+ [87] Peihao Zhu, Rameen Abdal, Yipeng Qin, and Peter Wonka. Sean: Image synthesis with semantic region-adaptive normalization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2, 5, 6, 7
382
+ [88] Zihan Zhu, Songyou Peng, Viktor Larsson, Weiwei Xu, Hujun Bao, Zhaopeng Cui, Martin R. Oswald, and Marc Pollefeys. Nice-slam: Neural implicit scalable encoding for slam. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2
3dawareconditionalimagesynthesis/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b57c072f5df3e2f34fbf1eea3f18a3e61f85b6cf788c0da8bf7d11f293b08b1a
3
+ size 990518
3dawareconditionalimagesynthesis/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e8b154df859717179fecf43723d96ffadc12c48713ffff42de610b7bb8836916
3
+ size 502877
3dawarefaceswapping/66d1bee4-1a69-4f6f-8a65-3f5202fddfc5_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:725bf366881d93ac6d8aac128e25dca7aabc1947126161d17a0cc3cd4b87b777
3
+ size 74719
3dawarefaceswapping/66d1bee4-1a69-4f6f-8a65-3f5202fddfc5_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ccaa38e9e2ecf7d0dac5d31adaf64826d95eb7d2f665d64ce617a6f645ba0d21
3
+ size 94969
3dawarefaceswapping/66d1bee4-1a69-4f6f-8a65-3f5202fddfc5_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:44ee50f1431760db219c4566fa42074fbe8d77c4e57d104f44a1327417a2110b
3
+ size 9303477
3dawarefaceswapping/full.md ADDED
@@ -0,0 +1,338 @@
 
 
 
 
1
+ # 3D-Aware Face Swapping
2
+
3
+ Yixuan Li Chao Ma* Yichao Yan* Wenhan Zhu Xiaokang Yang MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China {lyx0208, chaoma, yanyichao, zhuwenhan823, xkyang}@sjtu.edu.cn
4
+
5
+ ![](images/db2a32cb00c1a096529925fa25c9fba4b1870e608e0ae2061971e078920cfdd6.jpg)
6
+ Source
7
+
8
+ ![](images/645096ca6885eef89157cd30fb1a86f49e4367afe473d1792a150ff241f269a7.jpg)
9
+ Target
10
+
11
+ ![](images/2beb072e7b4fef458b7c6794a12f1ee2512754c90d1d15db8a9f3820adc3bb98.jpg)
12
+ Intermediate Views
13
+ Figure 1. Demonstration of the proposed 3dSwap. Given single-view source and target images, our method synthesizes high-fidelity and multi-view-consistent images of the swapped faces and the corresponding geometries. More results can be found on our project page.
14
+
15
+ ![](images/eb4dbc647fb79c18b1d27f2fa5a36f98974b2b986d386ec70793a2f5050469c1.jpg)
16
+ Source View
17
+
18
+ ![](images/f7226b694a58a983e49cf7078fa142fa89701e0100628cdfba1e0c25c2b1f54b.jpg)
19
+ Geometry
20
+
21
+ # Abstract
22
+
23
+ Face swapping is an important research topic in computer vision with wide applications in entertainment and privacy protection. Existing methods directly learn to swap 2D facial images, taking no account of the geometric information of human faces. In the presence of large pose variance between the source and the target faces, there always exist undesirable artifacts on the swapped face. In this paper, we present a novel 3D-aware face swapping method that generates high-fidelity and multi-view-consistent swapped faces from single-view source and target images. To achieve this, we take advantage of the strong geometry and texture prior of 3D human faces, where the 2D faces are projected into the latent space of a 3D generative model. By disentangling the identity and attribute features in the latent space, we succeed in swapping faces in a 3D-aware manner, being robust to pose variations while transferring fine-grained facial details. Extensive experiments demonstrate the superiority of our 3D-aware face swapping framework in terms of visual quality, identity similarity, and multi-view consistency. Code is available at https://1yx0208.github.io/3dSwap.
24
+
25
+ # 1. Introduction
26
+
27
+ Face swapping aims to transfer the identity of a person in the source image to another person in the target image while preserving other attributes like head pose, expression, illumination, background, etc. It has attracted extensive attention recently in the academic and industrial world for its potential wide applications in entertainment [14,30,38] and privacy protection [7,37,48].
28
+
29
+ The key of face swapping is to transfer the geometric shape of the facial region (i.e., eyes, nose, mouth) and detailed texture information (such as the color of eyes) from the source image to the target image while preserving both the geometry and texture of non-facial regions (i.e., hair, background, etc.). Currently, some 3D-based methods consider the geometry prior of human faces by fitting the input image to 3D face models such as the 3D Morphable Model (3DMM) [8] to overcome the differences in face orientation and expression between sources and targets [7, 15, 34, 43]. However, these parametric face models only produce coarse frontal faces without fine-grained details, leading to low-resolution and fuzzy swapping results. On the other hand, following Generative Adversarial Networks [24], GAN-based [6, 23, 32, 39, 40, 42] or GAN-inversion-based [44, 55, 57, 60] approaches adopt the adversarial
30
+
31
+ Despite the demonstrated photorealistic and high-resolution images, the faces swapped via 2D GANs exhibit undesirable artifacts when the two input faces undergo large pose variation, since the strong 3D geometry prior of human faces is ignored. Moreover, learning to swap faces in 2D images makes little use of the shape details from the sources, leading to poorer performance in identity transfer.
32
+
33
+ Motivated by the recent advances of 3D generative models [12, 13, 20, 25, 45] in synthesizing multi-view-consistent images and high-quality 3D shapes, a question naturally arises: can we perform face swapping in a 3D-aware manner to exploit the strong geometry and texture priors? To answer this question, two challenges arise. First, how to infer the 3D prior directly from 3D GAN models remains an open problem. Current 3D-aware generative models synthesize their results from random Gaussian noise $z$, so their output images are not controllable. This increases the complexity of inferring the required prior from an arbitrary input. Second, the inferred prior corresponding to the input images takes the form of a high-dimensional feature vector in the latent space of 3D GANs. Simply synthesizing multi-view target images from this prior and applying 2D face swapping to each of them produces not only inconsistent artifacts but also a heavy computational load.
34
+
35
+ To address these challenges, we systematically investigate the geometry and texture prior of these 3D generative models and propose a novel 3D-aware face swapping framework, 3dSwap. We introduce a 3D GAN inversion framework to project the 2D inputs into the 3D latent space, motivated by recent GAN inversion approaches [46, 47, 51]. Specifically, we design a learning-based inversion algorithm that trains an encoding network to efficiently and robustly project input images into the latent space of EG3D [12]. However, directly borrowing the architecture from 2D approaches is not sufficient, since a single-view input provides limited information about the whole human face. To further improve the multi-view consistency of latent code projection, we design a pseudo-multi-view training strategy. This design effectively bridges the domain gap between 2D and 3D. To tackle the second problem, we design a face swapping algorithm based on the 3D latent codes and directly synthesize the swapped faces with the 3D-aware generator. In this way, we achieve 3D GAN-inversion-based face swapping with a latent code manipulation algorithm consisting of style mixing and interpolation, where latent code interpolation is responsible for identity transfer while style mixing helps to preserve attributes.
36
+
37
+ In summary, our contributions are threefold:
38
+
39
+ - To the best of our knowledge, we are the first to address the 3D-aware face swapping task. The proposed 3dSwap method sets a strong baseline, and we hope this work will foster future research into this task.
40
+
41
+ - We design a learning-based 3D GAN inversion with a pseudo-multi-view training strategy to extract the geometry and texture prior from arbitrary input images. We further utilize this strong prior by designing a latent code manipulation algorithm, with which we directly synthesize the final results with the pretrained generator.
42
+ - Extensive experiments on benchmark datasets demonstrate the superiority of the proposed 3dSwap over state-of-the-art 2D face swapping approaches in identity transfer. Our reconstruction module for 3D GAN inversion also performs favorably against state-of-the-art methods.
43
+
44
+ # 2. Related Work
45
+
46
+ Face Swapping. Face swapping has emerged as a popular research topic in computer vision in recent years. Existing methods can be classified into two categories: 3D-based and GAN-based. Specifically, 3D-based methods [7, 15, 34, 43] fit input images to 3D parametric face models (e.g., 3DMM [8]) to overcome pose or perspective differences between input images. However, the performance of such methods is usually limited by the reconstruction results. GAN-based methods [6, 18, 23, 31, 32, 39, 40, 42] adopt the adversarial training strategy to generate photorealistic fake faces.
47
+
48
+ Early GAN-based face swapping methods are subject-specific, e.g., DeepFakes [18] and Korshunova et al. [31] require training different models for different inputs. Subject-specific approaches have limited real applications, since face swapping should be applicable to any unseen pair of input images; this limitation is addressed by later subject-agnostic face swapping approaches [6, 23, 32, 39, 40, 42]. To increase the resolution of the generated images, MegaFS [60] first proposes a GAN-inversion-based face swapping method, utilizing StyleGAN [28] to synthesize megapixel-level swapped faces. Xu et al. [56] and StyleSwap [57] integrate the StyleGAN2 [29] generator into their face swapping pipelines, applying its strong prior to generate high-resolution swapped faces. Following these approaches, we further extend the face swapping task into a 3D latent space to capture fine-grained details of the face shape and strengthen robustness under large pose variation.
49
+
50
+ 3D-Aware Generative Models. 3D-aware generative models aim to synthesize 3D-aware images (i.e., images that can be explicitly controlled by the camera pose) from 2D image collections. HoloGAN [41] first proposes a 3D-aware generative model by learning voxel features, but it only generates low-resolution results due to the limitation of computational cost. Recently, several works utilize the NeRF [36] representation [12, 20, 25, 45, 50]. GRAF [50] adopts patch sampling to reduce the computational cost during training.
51
+
52
+ ![](images/20011f7b7de3afcc456fd6d41159db98b0d24f7f9b0e11b9c733257da699b212.jpg)
53
+ Figure 2. The pipeline of our 3D-aware face swapping method, 3dSwap. In the first stage, we infer 3D geometry and texture prior of both source and target images with an encoder. We then design a latent code manipulation algorithm consisting of style mixing and interpolation to conduct face swapping based on these priors. Finally, swapped faces in any view direction can be synthesized by 3dSwap after fine-tuning the parameters of the generator following the joint pivot tuning optimization.
54
+
55
+ GRAM [20] estimates radiance manifolds to produce realistic images with fine details and strong 3D consistency. StyleNeRF [25] integrates NeRF with style-based generators and proposes a better up-sampler and a new regularization loss to mitigate inconsistencies. StyleSDF [45] presents a Signed Distance Field (SDF) based 3D representation that defines detailed 3D surfaces. EG3D [12] introduces a novel tri-plane representation for efficient 3D-aware image generation. Due to the strong generative capability of these 3D-aware generative models, we leverage them to infer a fine 3D prior from 2D images for our 3D-aware face swapping framework.
56
+
57
+ GAN Inversion. Since the introduction of Generative Adversarial Networks [24], numerous generative models have demonstrated great ability in synthesizing high-quality images [9, 12, 25, 28, 29, 45]. To fully leverage these well-trained GANs, the task of GAN inversion has emerged recently. In particular, GAN inversion aims to project a given image back to a vector $w$ in the latent space of a pretrained GAN model so that the image can be faithfully reconstructed from $w$ by the generator.
58
+
59
+ Early works invert images into Gaussian noise $z \in R^{1 \times 512}$ or the semantic latent space $\mathcal{W} \in R^{1 \times 512}$ [1, 16, 17, 59]. Abdal et al. [2] first extend the latent space to $\mathcal{W}+ \in R^{18 \times 512}$ for more accurate reconstruction. To predict the latent code, learning-based methods [3, 26, 46, 51, 52] train an encoder for latent projection, while optimization-based methods [1, 2, 16, 17] directly find the optimal code step-by-step from noise. Hybrid methods [4, 47, 59] combine both, optimizing latent codes initialized by encoders.
60
+
61
+ In addition, there are a few inversion works for 3D generative models. Pix2NeRF [10] generates Neural Radiance Fields (NeRF) [36] of an object from a single input image based on a pretrained $\pi$-GAN [13].
62
+
63
+ Lin et al. [33] leverage EG3D [12] and a pretrained 3DMM predictor [22] to reconstruct a 3D human face, which can be further animated or edited. Our reconstruction model also falls into this category, while the adopted learning-based algorithm is more robust and efficient compared with theirs.
64
+
65
+ # 3. Method
66
+
67
+ # 3.1. Overview
68
+
69
+ Given single-view source and target images, we aim to synthesize multi-view-consistent face images with the identity from the source image $x_{s}$ and other attributes from the target image $x_{t}$. Fig. 2 illustrates the overall pipeline and notation of the proposed 3dSwap. First, to extract an accurate geometry and texture prior from 2D images, we conduct learning-based 3D GAN inversion, training an encoding network to project the inputs into the latent space of a 3D-aware generative model. Specifically, we design a pseudo-multi-view optimization strategy to train an encoder with the feature pyramid architecture from pSp [46], empowering the latent code projection with the 3D consistency of a state-of-the-art 3D GAN, i.e., EG3D [12] (Sec. 3.2). Then, to disentangle identity from attributes in the latent space, we design a latent code manipulation algorithm consisting of style mixing and interpolation (Sec. 3.3). Finally, to improve the overall quality of our results and bridge the gap between 2D image generation and 3D rendering, we apply joint pivot tuning to the parameters of the pretrained EG3D generator (Sec. 3.4). The networks are trained with a set of well-designed loss functions to enforce identity transfer and attribute preservation (Sec. 3.5).
70
+
71
+ # 3.2. Inferring 3D Prior from 2D Images
72
+
73
+ To infer the geometry and texture prior from a 2D image, we leverage the state-of-the-art 3D-aware generative model EG3D [12] by projecting the inputs into its latent space. Since the optimization-based algorithm [47] is inefficient and less robust to non-frontal faces, we propose a learning-based inversion algorithm in which an encoding network is trained to project single-view inputs into the 3D latent space. Different from 2D StyleGAN-like models, which rely entirely on the latent code $w$ to generate the corresponding output $y = \mathcal{G}(w)$, a 3D-aware generative model has an extra input $d$ that controls the pose of the synthesized image: $y = \mathcal{G}(w,d)$. This means the mapping between latent codes and generated images is not bijective for 3D GANs, since multi-view images of the same person can be synthesized using the same $w$ but different $d$. Taking this property into account, we design a pseudo-multi-view training strategy, using a generated image in a different view from the source image to improve the consistency of the latent code projection. Fig. 3 illustrates the pipeline of our design.
74
+
75
+ Specifically, we first use an encoder to project the input image $x$ into the latent space $\mathcal{W}$ and obtain a high-dimensional intermediate latent vector $w_{x} = \mathcal{E}_{\theta}(x)$, where $\mathcal{E}_{\theta}(\cdot)$ is the pSp encoder with parameters $\theta$. Then, with the pretrained EG3D generator $\mathcal{G}(\cdot, \cdot)$ and the input direction $d$ estimated by Deep3D Face Reconstruction [21], we synthesize the reconstructed result $x' = \mathcal{G}(w_{x}, d)$. For a 2D GAN inversion approach, this ground-truth and reconstructed image pair $(x, x')$ is sufficient, but it is inadequate for 3D GANs due to the non-bijective property.
76
+
77
+ Ideally, this issue can be addressed by feeding multi-view images of a person into the encoder and minimizing the distance between their output vectors. However, it is difficult to obtain large-scale multi-view data, and we usually only have single-view images of a person in the training dataset. To this end, we additionally sample a random direction $\hat{d}$ and use the generator to synthesize $\hat{x} = \mathcal{G}(w_x,\hat{d})$ with the same latent code. This output image $\hat{x}$ , which is called a pseudo-input since it is generated by the 3D GAN, is again fed into the encoder-decoder structure to get $w_{\hat{x}} = \mathcal{E}_{\theta}(\hat{x})$ and $\hat{x}^{\prime} = \mathcal{G}(w_{\hat{x}},d)$ .
78
+
79
+ Now, we can define our optimization objectives. Following common inversion approaches, we apply pixel-wise loss functions between the input $x$ and its reconstruction $x'$. Under our pseudo-multi-view setting, we add constraints between the two latent codes $w_{x}$ and $w_{\hat{x}}$ to maintain 3D consistency. We further constrain the pixel-level distance between the second-order output $\hat{x}'$ synthesized with $w_{\hat{x}}$ and the original input $x$ to reinforce this constraint. In summary, this three-term optimization can be written as:
80
+
81
+ $$
82
+ \min _ {\theta} \left\{\mathcal {L} \left(x, x ^ {\prime}\right) + \eta \mathcal {L} \left(x, \hat {x} ^ {\prime}\right) + \mathcal {L} \left(w _ {x}, w _ {\hat {x}}\right) \right\}, \tag {1}
83
+ $$
84
+
85
+ ![](images/5f4dffb41fb01f22db715da2fc91824845b9416888bef1bcd43edbc53ea6e5aa.jpg)
86
+ Figure 3. The pipeline of our pseudo-multi-view training strategy.
87
+
88
+ where $\theta$ denotes the parameters of the encoder, $\eta$ is a trade-off parameter, and $\mathcal{L}(\cdot ,\cdot)$ denotes the loss functions that will be further discussed in Sec. 3.5. After optimizing the parameters of the encoding network with this strategy, we can obtain a rather accurate 3D prior $w_{x}$ from any given input $x$.
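+
+ To make this concrete, the following is a minimal PyTorch-style sketch of one optimization step of Eq. 1. Here `encoder`, `generator`, and `rec_loss` stand in for the pSp encoder $\mathcal{E}_{\theta}$, the frozen pretrained EG3D generator $\mathcal{G}$, and the reconstruction loss detailed in Sec. 3.5 (Eq. 6); the random pose sampler and all hyperparameters are illustrative assumptions, not the authors' released code.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def sample_random_pose(d):
+     # Hypothetical stand-in: perturb the estimated camera pose to get another view.
+     return d + 0.3 * torch.randn_like(d)
+
+ def pseudo_multi_view_step(encoder, generator, rec_loss, x, d, eta=0.25):
+     """One step of the pseudo-multi-view inversion objective (Eq. 1)."""
+     # First pass: encode the input and reconstruct it in its own view.
+     w_x = encoder(x)
+     x_rec = generator(w_x, d)
+
+     # Pseudo-input: render the same code from a random view, re-encode it,
+     # and render the re-encoded code back in the original view.
+     d_hat = sample_random_pose(d)
+     x_hat = generator(w_x, d_hat)
+     w_x_hat = encoder(x_hat)
+     x_hat_rec = generator(w_x_hat, d)
+
+     # Cosine consistency between the two latent codes (cf. Eq. 7).
+     lat = (1.0 - F.cosine_similarity(w_x.flatten(1), w_x_hat.flatten(1), dim=1)).mean()
+     return rec_loss(x, x_rec) + eta * rec_loss(x, x_hat_rec) + lat
+ ```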
89
+
90
+ # 3.3. Face Swapping via Latent Code Manipulation
91
+
92
+ To take full advantage of the prior extracted from the 3D GAN model, we calculate the latent code for the swapped face based on latent codes $w_{s} = \mathcal{E}_{\theta}(x_{s})$ of the source image $x_{s}$ and $w_{t} = \mathcal{E}_{\theta}(x_{t})$ of the target image $x_{t}$ . Before that, we step back and think about what these latent codes represent.
93
+
94
+ A face image usually contains different attributes such as face shape, hairstyle, skin color, etc. With the encoder discussed in Sec. 3.2, we embed all these attributes in the high-dimension latent vectors. However, identity features depending on the geometry of facial region (i.e., eyes, nose, mouth, cheek, and so on) also implicitly lie in such latent codes. For the task of face swapping, it is desirable if identity features can be disentangled from attribute features in the latent code. Afterward, we can simply exchange the identity part of the latent codes to achieve face swapping.
95
+
96
+ Since such identity and attributes are typically entangled in the latent codes, we design an interpolation strategy between the source and target latent codes with learnable coefficients. Here, the source latent code $w_{s}$ plays the leading role in the identity part while $w_{t}$ dominates the others. To obtain these coefficients, we concatenate $w_{s}$ and $w_{t}$ to form a $1 \times 1024$ vector and feed it into a four-layer multilayer perceptron (MLP) whose output $\rho$ contains the interpolation coefficients.
97
+
98
+ Moreover, StyleGAN-like [28, 29] models share the style mixing property of latent codes, meaning that different layers of the latent code control different attributes. For example, coarse spatial resolutions control high-level aspects like face shape and orientation, while fine-resolution latents control details like hair color. Motivated by this, we also investigate the layer-wise attributes in EG3D and observe similar properties. This allows us to generate more desirable swapping results by performing interpolation only on part of the latent code layers.
99
+
100
101
+
102
+ In summary, the latent code of swapped face $w_{fs}$ can be obtained by:
103
+
104
+ $$
+ w_{fs}^{(i)} = \begin{cases} \rho^{(i)} \times w_{t}^{(i)} + (1 - \rho^{(i)}) \times w_{s}^{(i)} & i \in [5, 9], \\ w_{t}^{(i)} & \text{otherwise}, \end{cases} \tag{2}
+ $$
107
+
108
+ where the superscript $i$ denotes the layer-wise expression of $w_{fs}$, and the choice of layers, from layer 5 to layer 9, follows the definition of "middle" layers in StyleGAN [28], with a slight modification since the dimension of the EG3D latent space is lower (i.e., $\mathcal{W} \in R^{14 \times 512}$). To better disentangle identity and attributes, we apply a sigmoid-shaped activation function with a factor $\lambda = 100$ to the $\rho$ generated by the MLP, pushing the coefficients closer to 0 or 1:
109
+
110
+ $$
111
+ \rho_ {n e w} ^ {(i)} = \left(1 + e ^ {- \lambda \rho_ {o l d} ^ {(i)}}\right) ^ {- 1}. \tag {3}
112
+ $$
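+
+ As an illustration, a sketch of this latent code manipulation (Eq. 2 and Eq. 3) is given below, assuming a $14 \times 512$ latent space. How the two codes are reduced to the $1 \times 1024$ MLP input and the exact dimensionality of the MLP output are not fully specified in the text, so the pooling over layers and the per-layer coefficients used here are our assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class LatentMixer(nn.Module):
+     """Sketch of the interpolation + style mixing of Eq. 2 and Eq. 3."""
+     def __init__(self, n_layers=14, dim=512, mix_lo=5, mix_hi=9, lam=100.0):
+         super().__init__()
+         self.mix_lo, self.mix_hi, self.lam = mix_lo, mix_hi, lam
+         n_mix = mix_hi - mix_lo + 1
+         # Four-layer MLP predicting one interpolation coefficient per mixed layer.
+         self.mlp = nn.Sequential(
+             nn.Linear(2 * dim, dim), nn.ReLU(),
+             nn.Linear(dim, dim), nn.ReLU(),
+             nn.Linear(dim, dim), nn.ReLU(),
+             nn.Linear(dim, n_mix),
+         )
+
+     def forward(self, w_s, w_t):
+         # w_s, w_t: (B, 14, 512) latent codes of source and target.
+         feat = torch.cat([w_s.mean(dim=1), w_t.mean(dim=1)], dim=-1)   # (B, 1024), assumed pooling
+         rho = torch.sigmoid(self.lam * self.mlp(feat))                 # Eq. 3: push coefficients toward 0/1
+         w_fs = w_t.clone()                                             # non-mixed layers copy the target
+         for k, i in enumerate(range(self.mix_lo, self.mix_hi + 1)):
+             r = rho[:, k:k + 1]
+             w_fs[:, i] = r * w_t[:, i] + (1.0 - r) * w_s[:, i]         # Eq. 2 on the "middle" layers
+         return w_fs
+ ```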
113
+
114
+ # 3.4. Joint Pivot Tuning
115
+
116
+ With the encoding network trained by the optimization strategy in Sec. 3.2, we can project an input image into a code in the 3D latent space. However, the inevitable reconstruction error degrades the performance of face swapping, which is a downstream task of 3D GAN inversion. Also, we observe that directly swapping faces via latent manipulation leads to slight artifacts in the non-facial region. Motivated by PTI [47], we adopt pivot tuning on the parameters of the pretrained EG3D generator using the fixed latent code $w_{fs}$ from Sec. 3.3, but with an optimization objective that considers both reconstruction quality and face swapping performance. The process of this "joint" pivot tuning is:
117
+
118
+ $$
+ \min_{\theta^{*}} \left\{ \mathcal{L}\left(x_{s/t}, \mathcal{G}_{\theta^{*}}(w_{s/t}, d_{s/t})\right) + \mathcal{L}\left(x_{t} \cdot M_{f},\; \mathcal{G}_{\theta^{*}}(w_{fs}, d_{t}) \cdot M_{f}\right) \right\}, \tag{4}
+ $$
125
+
126
+ where $\theta^{*}$ denotes the parameters of the EG3D generator, $d_{s/t}$ are the camera directions of the source and target images, $M_{f}$ is a binary mask that masks out the facial region, and $\mathcal{L}(\cdot ,\cdot)$ is the optimization constraint including MSE, LPIPS [58], and ID [19] losses.
127
+
128
+ Finally, with this finetuned generator and the latent code calculated by Eq. 2, we can synthesize the swapped face $y$ in any direction $d$ by:
129
+
130
+ $$
131
+ y = \mathcal {G} _ {\theta^ {*}} \left(w _ {f s}, d\right). \tag {5}
132
+ $$
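+
+ A minimal sketch of this joint pivot tuning loop (Eq. 4 and Eq. 5) is shown below. `loss_fn` stands for the combined MSE, LPIPS, and ID constraint, and the optimizer choice, learning rate, and step count are illustrative assumptions rather than the released configuration.
+
+ ```python
+ import torch
+
+ def joint_pivot_tuning(generator, loss_fn, w_s, w_t, w_fs, d_s, d_t,
+                        x_s, x_t, mask_f, steps=500, lr=3e-4):
+     """Sketch of Eq. 4: fine-tune the generator while the latent codes stay fixed."""
+     opt = torch.optim.Adam(generator.parameters(), lr=lr)
+     for _ in range(steps):
+         opt.zero_grad()
+         # Reconstruct source and target with their own codes and poses.
+         rec = loss_fn(x_s, generator(w_s, d_s)) + loss_fn(x_t, generator(w_t, d_t))
+         # Masked constraint on the swapped face rendered in the target view
+         # (mask_f removes the facial region, so only non-face pixels are constrained).
+         swap = loss_fn(x_t * mask_f, generator(w_fs, d_t) * mask_f)
+         (rec + swap).backward()
+         opt.step()
+     return generator
+
+ # After tuning, Eq. 5 is simply: y = generator(w_fs, d) for any camera direction d.
+ ```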
133
+
134
+ # 3.5. Objective Functions
135
+
136
+ GAN Inversion Losses. In Eq. 1, we use $\mathcal{L}(\cdot ,\cdot)$ to generically denote the loss function of our pseudo-multi-view training strategy. Here, we give its detailed form. Following previous work [46], we use three different objectives for supervising each pair of input image $x$ and reconstruction $x^{\prime}$ (the same losses apply to $\hat{x}^{\prime}$):
137
+
138
+ a pixel-wise $\mathcal{L}_1$ loss, a Learned Perceptual Image Patch Similarity [58] loss $\mathcal{L}_{LPIPS}$, and an identity similarity loss $\mathcal{L}_{id}$ that maximizes the cosine similarity between two identity embeddings estimated by ArcFace [19]. The total reconstruction loss between $x$ and $x^{\prime}$ is:
139
+
140
+ $$
+ \mathcal{L}_{rec}\left(x, x^{\prime}\right) = \lambda_{1} \mathcal{L}_{1}\left(x, x^{\prime}\right) + \lambda_{2} \mathcal{L}_{LPIPS}\left(x, x^{\prime}\right) + \lambda_{3} \mathcal{L}_{id}\left(x, x^{\prime}\right), \tag{6}
+ $$
147
+
148
+ where $\lambda_{1},\lambda_{2}$ and $\lambda_{3}$ are loss weights.
149
+
150
+ For the constraint between two latent codes, we adopt a cosine similarity:
151
+
152
+ $$
153
+ \mathcal {L} _ {l a t} \left(w _ {x}, w _ {\hat {x}}\right) = 1 - \cos \left(w _ {x}, w _ {\hat {x}}\right). \tag {7}
154
+ $$
155
+
156
+ Besides, we adopt the latent code regularization loss from pSp [46], which constrains the generated latent vector to be close to the average latent vector:
157
+
158
+ $$
159
+ \mathcal {L} _ {r e g} (x) = \left\| \mathcal {E} _ {\theta} (x) - \bar {x} \right\| _ {2}, \tag {8}
160
+ $$
161
+
162
+ where $\bar{x}$ is the average of 10,000 randomly sampled latent codes of the EG3D generator. The overall loss function for 3D GAN inversion is:
163
+
164
+ $$
165
+ \begin{array}{l} \mathcal {L} _ {i n v} = \mathcal {L} _ {r e c} \left(x, x ^ {\prime}\right) + \eta \mathcal {L} _ {r e c} \left(x, \hat {x} ^ {\prime}\right) + \mathcal {L} _ {l a t} \left(w _ {x}, w _ {\hat {x}}\right) \tag {9} \\ + \mathcal {L} _ {r e g} (x). \\ \end{array}
166
+ $$
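+
+ For reference, the inversion objective can be assembled as in the sketch below. `lpips_fn` and `id_sim_fn` are stand-ins for an LPIPS network and ArcFace cosine similarity, and the default weights follow the values reported in Sec. 4.1 ($\lambda_1 = \lambda_3 = 1$, $\lambda_2 = 0.8$, $\eta = 0.25$); this is a simplified sketch, not the exact training code.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def inversion_loss(x, x_rec, x_hat_rec, w_x, w_x_hat, w_avg,
+                    lpips_fn, id_sim_fn, lambdas=(1.0, 0.8, 1.0), eta=0.25):
+     """Sketch of the total 3D GAN inversion objective (Eq. 6-9)."""
+     l1, l2, l3 = lambdas
+
+     def rec(a, b):  # Eq. 6: L1 + LPIPS + identity terms
+         return (l1 * F.l1_loss(a, b)
+                 + l2 * lpips_fn(a, b).mean()
+                 + l3 * (1.0 - id_sim_fn(a, b)).mean())
+
+     # Eq. 7: cosine consistency between the two latent codes.
+     lat = (1.0 - F.cosine_similarity(w_x.flatten(1), w_x_hat.flatten(1), dim=1)).mean()
+     # Eq. 8: keep the predicted code close to the average latent code.
+     reg = (w_x - w_avg).flatten(1).norm(dim=1).mean()
+     # Eq. 9: full objective.
+     return rec(x, x_rec) + eta * rec(x, x_hat_rec) + lat + reg
+ ```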
167
+
168
+ Face Swapping Losses. For training our face swapping module, we first design a masked pixel-wise $\mathcal{L}_2$ loss for the face-irrelevant region:
169
+
170
+ $$
171
+ \mathcal {L} _ {2} \left(x _ {t}, y\right) = \left\| x _ {t} \cdot M _ {f} - y \cdot M _ {f} \right\| _ {2}, \tag {10}
172
+ $$
173
+
174
+ where $M_{f}$ is the same binary mask as in Sec. 3.4. We generate this mask according to the face segmentation labels of the FFHQ [28] dataset. As in the 3D GAN inversion stage, we adopt the LPIPS [58] loss $\mathcal{L}_{LPIPS}(x_t,y)$ to learn perceptual similarities and increase the quality of the generated images; the binary mask is also applied before feeding the images into the perceptual feature extractor.
175
+
176
+ For 3D-aware face swapping, we additionally synthesize the swapped face $\hat{y}$ in the view of the source image, calculating both $\mathcal{L}_{id}(x_s,y)$ and $\mathcal{L}_{id}(x_s,\hat{y})$ for better identity transferring.
177
+
178
+ Besides, $\mathcal{L}_{color}$ is designed to maintain the skin color of swapped faces:
179
+
180
+ $$
181
+ \mathcal {L} _ {\text {c o l o r}} (x _ {s}, y) = \| \bar {\mathcal {C}} (x _ {s} \cdot (1 - M _ {f})) - \bar {\mathcal {C}} (y \cdot (1 - M _ {f})) \| _ {2}, \tag {11}
182
+ $$
183
+
184
+ where $\bar{\mathcal{C}} (\cdot)$ denotes an average RGB value of the masked region.
185
+
186
+ The overall loss function for training the face swapping module is:
187
+
188
+ $$
189
+ \begin{array}{l} \mathcal {L} _ {f s} = \mathcal {L} _ {2} \left(x _ {t}, y\right) + \mathcal {L} _ {L P I P S} \left(x _ {t}, y\right) + \mathcal {L} _ {i d} \left(x _ {s}, y\right) \tag {12} \\ + \mathcal {L} _ {i d} (x _ {s}, \hat {y}) + \mathcal {L} _ {c o l o r} (x _ {s}, y). \\ \end{array}
190
+ $$
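+
+ A corresponding sketch of the face swapping objective (Eq. 10 to Eq. 12) follows. The masking convention matches Sec. 3.4 ($M_f$ removes the facial region), and the exact reductions (means versus norms) are simplifications of the equations above.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def face_swapping_loss(x_s, x_t, y, y_src_view, mask_f, lpips_fn, id_sim_fn):
+     """Sketch of Eq. 12; y is the swap in the target view, y_src_view in the source view."""
+     # Eq. 10 / LPIPS term: constrain only face-irrelevant pixels of the target.
+     l2 = F.mse_loss(x_t * mask_f, y * mask_f)
+     lpips = lpips_fn(x_t * mask_f, y * mask_f).mean()
+     # Identity terms against the source in both views.
+     ident = (1.0 - id_sim_fn(x_s, y)).mean() + (1.0 - id_sim_fn(x_s, y_src_view)).mean()
+
+     # Eq. 11: match the average skin color inside the facial region (1 - M_f).
+     def mean_color(img, m):
+         return (img * m).sum(dim=(2, 3)) / m.sum(dim=(2, 3)).clamp(min=1.0)
+
+     color = (mean_color(x_s, 1.0 - mask_f) - mean_color(y, 1.0 - mask_f)).norm(dim=-1).mean()
+     return l2 + lpips + ident + color
+ ```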
191
+
192
+ ![](images/f0959710d335e56833f3d38a55c940cce92e9bae72323375345958bc6bddd654.jpg)
193
+ Figure 4. Qualitative comparison of face swapping on CelebA-HQ dataset. Compared with all these 2D approaches, our method extracts facial shapes more accurately and transfers identity better. Moreover, since we conduct face swapping in latent space and a well-trained 3D GAN directly synthesizes the results, there are no obvious artifacts in the facial region.
194
+
195
+ # 4. Experiments
196
+
197
+ In this section, we first compare the proposed 3dSwap with state-of-the-art 2D face swapping approaches. We then analyze face swapping in a 3D-aware manner and introduce additional evaluation metrics designed for 3D face swapping. Finally, we carry out ablation studies to evaluate the effectiveness of our major design choices.
198
+
199
+ # 4.1. Implementation Details
200
+
201
+ In all experiments, the Ranger optimizer [54] is used to train our networks with a learning rate of $1 \times 10^{-4}$. Hyperparameters are set as $\lambda_1 = \lambda_3 = 1$, $\lambda_2 = 0.8$ in Eq. 6 and $\eta = 0.25$ in Eq. 9. In terms of training time, the inversion module is trained for 1,000,000 steps on 4 NVIDIA RTX3090 GPUs for about 3 days, while the face swapping module is trained for 500,000 steps, also on 4 GPUs, for about 2 days. The pivot tuning optimization at inference time takes about 8 minutes on a single GPU.
202
+
203
+ # 4.2. Datasets
204
+
205
+ We conduct experiments on two datasets: 1) The FFHQ [28] dataset contains 70,000 high-quality images of human faces crawled from Flickr with considerable variation in age, ethnicity, and background. All images in this dataset have a resolution of $1024 \times 1024$. 2) The CelebA-HQ [27] dataset is the high-quality version of the large-scale face attributes dataset CelebA [35] and contains 30,000 images at $1024 \times 1024$. Specifically, we train our model on FFHQ, while comparison experiments are conducted on CelebA-HQ.
206
+
207
+ We follow the data preprocessing of EG3D, cropping images according to facial landmarks and resizing them to a resolution of $512 \times 512$. Due to the relatively expensive inference cost of 3dSwap mentioned in Sec. 4.1, we run the following comparison experiments on 1,000 source-target image pairs.
208
+
209
+ # 4.3. Comparison with 2D Face Swapping Methods
210
+
211
+ In this section, we compare the proposed 3dSwap with four 2D swapping methods: SimSwap [14], MegaFS [60], InfoSwap [23], and Xu et al. [56]. These four methods are representative GAN-based [14, 23] and GAN-inversion-based [56, 60] approaches from recent years with state-of-the-art performance. Moreover, their official source code is publicly available, allowing fair comparisons.
212
+
213
+ Qualitative Comparison. The qualitative comparison results are shown in Fig. 4. Compared with all these 2D face swapping approaches, our method transfers more accurate geometry features (i.e., facial contour) and detailed texture features such as eye color to the targets, reflecting better identity transfer. Also, since we directly synthesize our final results using a well-trained generator and a properly calculated latent code, the swapped faces we generate are more realistic, without obvious artifacts in the facial region. More qualitative results on CelebA-HQ are provided in the supplementary material.
214
+
215
+ Quantitative Comparison. We adopt several evaluation metrics in our quantitative experiments to show the effectiveness of our model; results are reported in Table 1.
216
+
217
+ <table><tr><td>Method</td><td>ID ↑</td><td>Pose ↓</td><td>Exp. ↓</td></tr><tr><td>SimSwap [14]</td><td>0.57</td><td>1.49</td><td>10.48</td></tr><tr><td>MegaFS [60]</td><td>0.48</td><td>3.95</td><td>14.08</td></tr><tr><td>InfoSwap [23]</td><td>0.61</td><td>2.50</td><td>10.63</td></tr><tr><td>Xu et al. [56]</td><td>0.54</td><td>2.66</td><td>12.94</td></tr><tr><td>Ours</td><td>0.72</td><td>1.68</td><td>13.76</td></tr></table>
+
+ Table 1. Quantitative Results. We compare our model with four competing methods in ID similarity for identity transfer and pose & expression error for attribute preservation.
218
+
219
+ Following MegaFS [60], we measure the ID similarity by calculating the cosine similarity between the face embeddings of the source and swapped faces, estimated by a pretrained face recognition network [19]. Meanwhile, the pose error computes the $\mathcal{L}_2$ distance between the estimated Euler angles [49] of the target and swapped images. For the expression error, we calculate the average distance between estimated facial landmarks [5].
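+
+ For reference, the sketch below shows how these three metrics can be computed, with `id_embed`, `pose_est`, and `lmk_est` standing in for the pretrained face recognition network [19], head pose estimator [49], and landmark detector [5]; it approximates the protocol rather than reproducing the exact evaluation script.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def swap_metrics(x_src, x_tgt, x_swap, id_embed, pose_est, lmk_est):
+     """Sketch of ID similarity, pose error, and expression error."""
+     # ID similarity: cosine similarity between source and swapped embeddings.
+     id_sim = F.cosine_similarity(id_embed(x_src), id_embed(x_swap), dim=-1).mean()
+     # Pose error: L2 distance between Euler angles of target and swapped images.
+     pose_err = (pose_est(x_tgt) - pose_est(x_swap)).norm(dim=-1).mean()
+     # Expression error: average distance between estimated facial landmarks.
+     exp_err = (lmk_est(x_tgt) - lmk_est(x_swap)).norm(dim=-1).mean()
+     return id_sim.item(), pose_err.item(), exp_err.item()
+ ```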
220
+
221
+ For the cosine similarity of identity, a crucial indicator for face swapping since it evaluates the quality of identity transfer, we significantly outperform all these 2D approaches. Together with the visual results in Fig. 4, this shows that our method transfers identity better thanks to the 3D prior. For attribute preservation, our method, which can be explicitly controlled by a camera pose, performs rather well on the pose error, being only slightly inferior to SimSwap [14], but it performs worse than the 2D approaches on the expression error. Considering all three quantitative results, we can still claim that the proposed 3dSwap is superior to 2D methods in identity transfer and performs close to them in attribute preservation.
222
+
223
+ # 4.4. Further Analysis on 3D-Aware Face Swapping
224
+
225
+ As the first 3D-aware face swapping method, the proposed 3dSwap specializes in synthesizing multi-view-consistent results. In this section, we conduct more experiments in this direction, showing visual comparisons of 3D consistency and proposing new criteria for 3D-aware face swapping.
226
+
227
+ Visualization on Multi-View Images. To compare fairly with 2D face swapping approaches, we first synthesize multi-view target images using our reconstruction module and then apply SimSwap [14] and InfoSwap [23] to them. The visual results are shown in Fig. 5, where the results of the 2D face swapping methods under different views are not as consistent as ours (e.g., the shape of the nose, mouth, and eyebrows changes). More artifacts appear when the target images are in side views. Please refer to the video in the supplementary material for more intuitive comparisons.
228
+
229
+ ![](images/95246c90f0dc73ab07aa12a8f25de081f74e28862b788dd74dc9d525b9343c5a.jpg)
230
+ Figure 5. Visual comparison of multi-view results among InfoSwap [23], SimSwap [14], and ours.
231
+
232
+ Criteria for 3D-Aware Face Swapping. In Sec. 4.3, the performance of identity transfer is evaluated based on face embeddings estimated by a pretrained face recognition network [19]. However, such networks are not robust enough to pose variation, so this could be an unfair criterion for face swapping. For 3D-aware face swapping, we can simply synthesize the swapped face in the view of the source image. In this way, the "Aligned Identity Similarity" becomes a reasonable standard to evaluate 3D-aware face swapping models. Moreover, inspired by humans' ability to recognize a familiar person from any direction, we synthesize the swapped face in 9 different fixed poses and calculate the average identity similarity together with the images in the source and target views. We report results for these two evaluation metrics in Table 2; images under these fixed poses are shown in the supplementary material.
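+
+ The two proposed criteria can be sketched as follows, where the swapped latent code is re-rendered in the source view (aligned ID similarity) and under a set of fixed poses plus the source and target views (average ID similarity). The pose set and helper functions are placeholders for illustration only.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def id_similarity(a, b, id_embed):
+     return F.cosine_similarity(id_embed(a), id_embed(b), dim=-1).mean()
+
+ def aligned_and_average_id(x_src, generator, w_fs, d_src, d_tgt, fixed_poses, id_embed):
+     """Sketch of the Aligned and Average Identity Similarity criteria."""
+     # Aligned ID similarity: swapped face rendered back in the source view.
+     aligned = id_similarity(x_src, generator(w_fs, d_src), id_embed)
+     # Average ID similarity over fixed poses plus the source and target views.
+     views = [d_src, d_tgt] + list(fixed_poses)
+     avg = torch.stack([id_similarity(x_src, generator(w_fs, d), id_embed)
+                        for d in views]).mean()
+     return aligned.item(), avg.item()
+ ```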
233
+
234
235
+
236
+ <table><tr><td>Metric</td><td>Aligned ID Sim.↑</td><td>Average ID Sim. ↑</td></tr><tr><td>Ours</td><td>0.85</td><td>0.42</td></tr></table>
237
+
238
+ Table 2. Quantitative Results of New Metrics. We test the proposed 3dSwap under the two new evaluation metrics.
239
+
240
+ # 4.5. Ablation Studies
241
+
242
+ In this section, we conduct ablation experiments on the CelebA-HQ dataset to evaluate the effectiveness of the major design of the proposed 3dSwap.
243
+
244
+ Effectiveness of 3D GAN Inversion. Since previous works [12, 33] do not release the code of their 3D GAN inversion part, we follow the EG3D paper and reproduce pivotal tuning inversion [47] for the generator with the same hyperparameters. In this section, we mainly compare our design with the optimization-based latent code projection of PTI on EG3D to show the effectiveness of the learning-based inversion algorithm we use.
245
+
246
+ ![](images/d2f749adcbf42fd764f6462588f75b1fcd20e970194d065f742935a0f212b55a.jpg)
247
+ Figure 6. Qualitative comparison on 3D GAN inversion. Compared to the direct application of pivotal tuning inversion, our design reconstructs details (e.g., shape and color of the eyes, glasses, etc.) better.
248
+
249
+ For a fair comparison, both models are tested on the same 2,000 images from CelebA-HQ, and both tune the parameters of the pretrained generator for 500 steps.
250
+
251
+ We show the qualitative comparison results in Fig. 6. Our design performs better in detail reconstruction (e.g., eye shape, glasses, etc.), although the optimization-based approach still recovers the overall face shape, hair color, etc. accurately.
252
+
253
+ For 3D GAN inversion, we adopt the same metrics as 2D GAN inversion: the $\mathcal{L}_2$ distance (or MSE) for pixel-wise similarity, the LPIPS [58] distance for perceptual similarity, and MS-SSIM [53] for structural similarity. Additionally, we calculate the ID similarity to verify the accuracy of the reconstruction; the results are reported in Table 3. Our design outperforms the optimization-based approach on all four criteria.
254
+
255
+ <table><tr><td>Method</td><td>MSE ↓</td><td>LPIPS ↓</td><td>SSIM ↑</td><td>ID Sim.↑</td></tr><tr><td>EG3D with Opt.</td><td>0.0896</td><td>0.2761</td><td>0.6197</td><td>0.7318</td></tr><tr><td>Ours</td><td>0.0168</td><td>0.1049</td><td>0.7348</td><td>0.8616</td></tr></table>
256
+
257
+ Table 3. Quantitative Results on 3D GAN inversion. We compare our 3D GAN inversion module with an optimization-based inversion on EG3D under four common evaluation metrics in the 2D GAN inversion task.
258
+
259
+ Effectiveness of Style Mixing. As mentioned in Sec. 3.3, we adopt style mixing and latent code interpolation for face swapping. Here, we briefly show the effectiveness of style mixing. A comparison of our model with and without style mixing is shown in Fig. 7. Identity is transferred well under both settings; however, attributes including skin color and background are noticeably affected if we interpolate all layers of the latent codes, as shown in the third column.
260
+
261
+ ![](images/1aca44d9bbf2b8e7a9caa5de4e20c2168eb55309c7848ef6e6c14b1390186bd2.jpg)
262
+ Figure 7. Visualization of face swapping results with and without style mixing.
263
+
264
+ # 5. Conclusion
265
+
266
+ We propose a novel 3D-aware face swapping method, 3dSwap, that generates high-fidelity and multi-view-consistent swapped faces. To leverage both the geometry and texture prior of 3D human faces, we project the input images into the latent space of a 3D-aware generative model with a learning-based inversion. A latent code manipulation algorithm, consisting of style mixing and latent code interpolation, is then designed to achieve 3D GAN-inversion-based face swapping. We further bridge the image-quality gap between 2D generation and 3D rendering by applying joint pivot tuning. To the best of our knowledge, 3dSwap is the first 3D-aware face swapping method, and it thus sets a strong baseline for future research on 3D forgery detection and face swapping.
267
+
268
+ Limitations. Since we need to project input images into the latent space of a 3D GAN, which contains far more information than that of 2D GANs, we tune the parameters of the pretrained generator during testing, leading to a rather long inference time. Moreover, since the final results are rendered by a 3D generator, our method fails to accurately reconstruct clothing, backgrounds, etc. in the image, limited by the current development of 3D-aware generative models.
269
+
270
+ Broader Impacts. Although it is not the purpose of this work, photorealistic swapped faces may potentially be abused. On the other hand, our model can be used to generate high-quality, multi-view examples to facilitate face forgery detection [11].
271
+
272
+ Acknowledgements. This work was supported by NSFC (62201342), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), and the Fundamental Research Funds for the Central Universities.
273
+
274
+ # References
275
+
276
+ [1] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan: How to embed images into the stylegan latent space? In ICCV, pages 4431-4440, 2019.
277
+ [2] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan++: How to edit the embedded images? In CVPR, pages 8293-8302, 2020.
278
+ [3] Yuval Alaluf, Or Patashnik, and Daniel Cohen-Or. Restyle: A residual-based stylegan encoder via iterative refinement. In ICCV, pages 6691–6700, 2021.
279
+ [4] Yuval Alaluf, Omer Tov, Ron Mokady, Rinon Gal, and Amit Bermano. Hyperstyle: Stylegan inversion with hypernetworks for real image editing. In CVPR, pages 18511-18521, 2022.
280
+ [5] Tadas Baltrusaitis, Peter Robinson, and Louis-Philippe Morency. Openface: An open source facial behavior analysis toolkit. In WACV, pages 1–10, 2016.
281
+ [6] Jianmin Bao, Dong Chen, Fang Wen, Houqiang Li, and Gang Hua. Towards open-set identity preserving face synthesis. In CVPR, pages 6713-6722, 2018.
282
+ [7] Volker Blanz, Kristina Scherbaum, Thomas Vetter, and Hans-Peter Seidel. Exchanging faces in images. Comput. Graph. Forum, 23(3):669-676, 2004.
283
+ [8] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In SIGGRAPH, pages 187-194, 1999.
284
+ [9] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In ICLR, 2019.
285
+ [10] Shengqu Cai, Anton Obukhov, Dengxin Dai, and Luc Van Gool. Pix2nerf: Unsupervised conditional $\pi$ -gan for single image to neural radiance fields translation. In CVPR, pages 3971-3980, 2022.
286
+ [11] Junyi Cao, Chao Ma, Taiping Yao, Shen Chen, Shouhong Ding, and Xiaokang Yang. End-to-end reconstruction-classification learning for face forgery detection. In CVPR, pages 4103-4112, 2022.
287
+ [12] Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J. Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3d generative adversarial networks. In CVPR, pages 16123-16133, 2022.
288
+ [13] Eric R. Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. Pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In CVPR, pages 5799-5809, 2021.
289
+ [14] Renwang Chen, Xuanhong Chen, Bingbing Ni, and Yanhao Ge. Simswap: An efficient framework for high fidelity face swapping. In ACMMM, pages 2003-2011, 2020.
290
+ [15] Yi-Ting Cheng, Virginia Tzeng, Yu Liang, Chuan-Chang Wang, Bing-Yu Chen, Yung-Yu Chuang, and Ming Ouhyoung. 3d-model-based face replacement in video. In SIGGRAPH, 2009.
291
+ [16] Edo Collins, Raja Bala, Bob Price, and Sabine Süsstrunk. Editing in style: Uncovering the local semantics of gans. In CVPR, pages 5770-5779, 2020.
292
+
293
+ [17] Antonia Creswell and Anil Anthony Bharath. Inverting the generator of a generative adversarial network. IEEE Trans. Neural Networks Learn. Syst., 30(7):1967-1974, 2019.
294
+ [18] DeepFakes. https://github.com/ondyari/FaceForensics/tree/master/dataset/DeepFakes. Accessed:2022-10-18.
295
+ [19] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In CVPR, pages 4690-4699, 2019.
296
+ [20] Yu Deng, Jiaolong Yang, Jianfeng Xiang, and Xin Tong. GRAM: generative radiance manifolds for 3d-aware image generation. In CVPR, pages 10663-10673, 2022.
297
+ [21] Yu Deng, Jiaolong Yang, Sicheng Xu, Dong Chen, Yunde Jia, and Xin Tong. Accurate 3d face reconstruction with weakly-supervised learning: From single image to image set. In CVPRW, pages 285-295, 2019.
298
+ [22] Yao Feng, Haiwen Feng, Michael J. Black, and Timo Bolkart. Learning an animatable detailed 3d face model from in-the-wild images. ACM Trans. Graph., 40(4):88:1-88:13, 2021.
299
+ [23] Gege Gao, Huaibo Huang, Chaoyou Fu, Zhaoyang Li, and Ran He. Information bottleneck disentanglement for identity swapping. In CVPR, pages 3404-3413, 2021.
300
+ [24] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial networks. Commun. ACM, 63(11):139–144, 2020.
301
+ [25] Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. Stylenerf: A style-based 3d aware generator for high-resolution image synthesis. In ICLR, 2022.
302
+ [26] Shanyan Guan, Ying Tai, Bingbing Ni, Feida Zhu, Feiyue Huang, and Xiaokang Yang. Collaborative learning for faster stylegan embedding. CoRR, abs/2007.01758, 2020.
303
+ [27] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In ICLR, 2018.
304
+ [28] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, pages 4401-4410, 2019.
305
+ [29] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In CVPR, pages 8107-8116, 2020.
306
+ [30] Ira Kemelmacher-Shlizerman. Transfiguring portraits. ACM Trans. Graph., 35(4):94:1-94:8, 2016.
307
+ [31] Iryna Korshunova, Wenzhe Shi, Joni Dambre, and Lucas Theis. Fast face-swap using convolutional neural networks. In ICCV, pages 3697-3705, 2017.
308
+ [32] Lingzhi Li, Jianmin Bao, Hao Yang, Dong Chen, and Fang Wen. Faceshifter: Towards high fidelity and occlusion aware face swapping. CoRR, abs/1912.13457, 2019.
309
+ [33] Connor Z. Lin, David B. Lindell, Eric R. Chan, and Gordon Wetzstein. 3d GAN inversion for controllable portrait image animation. CoRR, abs/2203.13441, 2022.
310
+ [34] Yuan Lin, Shengjin Wang, Qian Lin, and Feng Tang. Face swapping under large pose variations: A 3d model based approach. In ICME, pages 333-338, 2012.
311
+
312
+ [35] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015.
313
+ [36] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, pages 405-421, 2020.
314
+ [37] Saleh Mosaddegh, Loïc Simon, and Frédéric Jurie. Photorealistic face de-identification by aggregating donors' face components. In ACCV, pages 159–174, 2014.
315
+ [38] Jacek Naruniec, Leonhard Helminger, Christopher Schroers, and Romann M. Weber. High-resolution neural face swapping for visual effects. Comput. Graph. Forum, 39(4):173-184, 2020.
316
+ [39] Ryota Natsume, Tatsuya Yatagawa, and Shigeo Morishima. Fsnet: An identity-aware generative model for image-based face swapping. In ACCV, pages 117-132, 2018.
317
+ [40] Ryota Natsume, Tatsuya Yatagawa, and Shigeo Morishima. RSGAN: face swapping and editing using face and hair representation in latent spaces. In SIGGRAPH, pages 69:1-69:2, 2018.
318
+ [41] Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. Hologan: Unsupervised learning of 3d representations from natural images. In ICCV, pages 7587–7596, 2019.
319
+ [42] Yuval Nirkin, Yosi Keller, and Tal Hassner. FSGAN: subject agnostic face swapping and reenactment. In ICCV, pages 7183-7192, 2019.
320
+ [43] Yuval Nirkin, Iacopo Masi, Anh Tuan Tran, Tal Hassner, and Gérard G. Medioni. On face segmentation, face swapping, and face perception. In AFGR, 2018.
321
+ [44] Yotam Nitzan, Amit Bermano, Yangyan Li, and Daniel Cohen-Or. Face identity disentanglement via latent space mapping. ACM Trans. Graph., 39(6):225:1-225:14, 2020.
322
+ [45] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. Stylesdf: High-resolution 3d-consistent image and geometry generation. In CVPR, pages 13493-13503, 2022.
323
+ [46] Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, and Daniel Cohen-Or. Encoding in style: A stylegan encoder for image-to-image translation. In CVPR, pages 2287–2296, 2021.
324
+ [47] Daniel Roich, Ron Mokady, Amit H. Bermano, and Daniel Cohen-Or. Pivotal tuning for latent-based editing of real images. TOG, pages 1–13, 2022.
325
+ [48] Arun Ross and Asem A. Othman. Visual cryptography for biometric privacy. IEEE Trans. Inf. Forensics Secur., 6(1):70-81, 2011.
326
+ [49] Nataniel Ruiz, Eunji Chong, and James M. Rehg. Fine-grained head pose estimation without keypoints. In CVPR, pages 2074-2083, 2018.
327
+ [50] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. GRAF: generative radiance fields for 3d-aware image synthesis. In NeurIPS, 2020.
328
+ [51] Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, and Daniel Cohen-Or. Designing an encoder for stylegan image manipulation. ACM Trans. Graph., 40(4):133:1-133:14, 2021.
329
+
330
+ [52] Tengfei Wang, Yong Zhang, Yanbo Fan, Jue Wang, and Qifeng Chen. High-fidelity gan inversion for image attribute editing. In CVPR, pages 11369-11378, 2022.
331
+ [53] Z. Wang, E.P. Simoncelli, and A.C. Bovik. Multiscale structural similarity for image quality assessment. In ACSSC, 2003.
332
+ [54] Less Wright. Ranger - a synergistic optimizer. https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer. Accessed: 2022-9-18.
333
+ [55] Yangyang Xu, Bailin Deng, Junle Wang, Yanqing Jing, Jia Pan, and Shengfeng He. High-resolution face swapping via latent semantics disentanglement. In CVPR, pages 7632-7641, 2022.
334
+ [56] Yangyang Xu, Bailin Deng, Junle Wang, Yanqing Jing, Jia Pan, and Shengfeng He. High-resolution face swapping via latent semantics disentanglement. In CVPR, pages 7632-7641, 2022.
335
+ [57] Zhiliang Xu, Hang Zhou, Zhibin Hong, Ziwei Liu, Jiaming Liu, Zhizhi Guo, Junyu Han, Jingtuo Liu, Errui Ding, and Jingdong Wang. Styleswap: Style-based generator empowers robust face swapping. In ECCV, pages 661-677, 2022.
336
+ [58] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, pages 586-595, 2018.
337
+ [59] Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In ECCV, pages 597-613, 2016.
338
+ [60] Yuhao Zhu, Qi Li, Jian Wang, Cheng-Zhong Xu, and Zhenan Sun. One shot face swapping on megapixels. In CVPR, pages 4834-4844, 2021.
3dawarefaceswapping/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:923b60d6f85fef095c5b809511942b3320281a48de7bb5a035beeb5a9fdcd899
3
+ size 577460
3dawarefaceswapping/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:073b49528cec716ba779c66f9c350d3a410d2274cad701c3bd5b7331e24f66c1
3
+ size 394314
3dawarefaciallandmarkdetectionviamultiviewconsistenttrainingonsyntheticdata/4aaf53b5-ffe9-4822-bbbc-9f293082f284_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2416d7ac3c69f16051e95e759ec54c178d77bba14394fc88422fa64740c432a8
3
+ size 86232
3dawarefaciallandmarkdetectionviamultiviewconsistenttrainingonsyntheticdata/4aaf53b5-ffe9-4822-bbbc-9f293082f284_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:68a65cd26872d97335327613039e2dc11a05bf961ea01c25a1da19f5d0778c27
3
+ size 113025
3dawarefaciallandmarkdetectionviamultiviewconsistenttrainingonsyntheticdata/4aaf53b5-ffe9-4822-bbbc-9f293082f284_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1034b4b2767e62ba3c95a94f26b0be06635ae1de9b1f5422b6d2aef956499555
3
+ size 6587817
3dawarefaciallandmarkdetectionviamultiviewconsistenttrainingonsyntheticdata/full.md ADDED
@@ -0,0 +1,359 @@
 
 
 
 
1
+ # 3D-aware Facial Landmark Detection via Multi-view Consistent Training on Synthetic Data
2
+
3
+ Libing Zeng $^{1*}$ , Lele Chen $^{2}$ , Wentao Bao $^{3*}$ , Zhong Li $^{2}$ , Yi Xu $^{2}$ , Junsong Yuan $^{4}$ , Nima K. Kalantari $^{1}$ $^{1}$ Texas A&M University, $^{2}$ OPPO US Research Center, InnoPeak Technology, Inc,
4
+ $^{3}$ Michigan State University, $^{4}$ University at Buffalo
5
+
6
+ ![](images/78e916daa4bd05091dac17e234910d011ec6556475c25fe43fb6d8e37bc3162e.jpg)
7
+ (a) Multi-view Inconsistency
8
+
9
+ ![](images/4919b56620ae21697831b195c4fa557750b5cc1fe3d2388aaea65720dc521e69.jpg)
10
+ (b) DAD-3DNet
11
+
12
+ ![](images/57c47b59d38b7da15bb14f7590b3291a5ba60eda6fd9368720dcbec5a98be4e4.jpg)
13
+ (c) DAD-3DNet+ (Ours)
14
+ Figure 1. We plot the landmark annotations labeled by different annotators in different colors in view #1 of (a). Accurate annotation of non-frontal faces with large angles like view #1 is challenging. This is a major problem since small differences between annotated landmarks in view #1 become substantially magnified when projected to view #2. Training a system on such datasets can lead to poor landmark detection accuracy, as shown in (b). We address this issue by proposing a 3D-aware optimization module that enforces multi-view consistency. We show the landmark detection improvement in (c). Magnified insets of (b) and (c) are shown in (d). After being refined by the proposed 3D-aware learning, the detected facial landmarks are better aligned with the identity.
15
+
16
+ ![](images/e682e867ccf468210ab3cdbd5f548fd4edc26a0e75ce8c0f95aa2f6c76a46a88.jpg)
17
+ (d)
18
+
19
+ ![](images/76baf03b0bfbec6487cbc3e3ca9a8d82f8dea50c47b0094375bfffb23984ec43.jpg)
20
+
21
+ # Abstract
22
+
23
+ Accurate facial landmark detection on wild images plays an essential role in human-computer interaction, entertainment, and medical applications. Existing approaches have limitations in enforcing 3D consistency while detecting 3D/2D facial landmarks due to the lack of multi-view in-the-wild training data. Fortunately, with the recent advances in generative visual models and neural rendering, we have witnessed rapid progress towards high-quality 3D image synthesis. In this work, we leverage such approaches to construct a synthetic dataset and propose a novel multi-view consistent learning strategy to improve 3D facial landmark detection accuracy on in-the-wild images. The proposed 3D-aware module can be plugged into any learning-based landmark detection algorithm to enhance its accuracy. We demonstrate the superiority of the proposed plug-in module with extensive comparisons against state-of-the-art methods on several real and synthetic datasets.
24
+
25
26
+
27
+ # 1. Introduction
28
+
29
+ Accurate and precise facial landmarks play a significant role in computer vision and graphics applications, such as face morphing [54], facial reenactment [58], 3D face reconstruction [17, 18, 30], head pose estimation [38], face recognition [1, 10, 13, 19, 32, 41, 71], and face generation [11, 21, 60, 69]. In these applications, facial landmark detection provides a sparse representation that eases the burden of network convergence in different training stages and is often used as a performance evaluation metric. For instance, as a facial prior, it provides good initialization for subsequent training [66, 67, 69, 76], a good intermediate representation to bridge the gap between different modalities for content generation [11, 27, 51, 79], loss terms which regularize the facial expression [11, 52], and evaluation metrics to measure facial motion quality [53, 73, 78].
30
+
31
32
+
33
+ The aforementioned applications require the estimated facial landmarks to be accurate even under significantly varied facial appearance across different identities, facial expressions, and extreme head poses. Tremendous efforts have been devoted to addressing this problem [15, 22-24, 29, 34, 40, 56, 63, 74, 75, 77, 82, 84]. These approaches often rely on manually annotated large-scale lab-controlled or in-the-wild image datasets [4, 34] to handle various factors such as arbitrary facial expressions, head poses, illumination, facial occlusions, etc.
34
+
35
+ However, even with the high cost of human labeling, consistent and accurate manual annotation of landmarks remains challenging [22, 23, 34]. It is very difficult, if not impossible, for a person to annotate the facial landmark keypoints at the same pixel locations for faces of different poses, let alone for different annotators under different labeling environments. Such annotation inconsistency and inaccuracy in training images is often the critical obstacle to learning an accurate landmark localization model. This is particularly a major problem for non-frontal faces, where annotation becomes extremely challenging. As shown in Fig. 1(a), a small annotation variation in view #1 results in a significant inaccuracy in view #2. This multi-view inconsistency and inaccuracy can ultimately lead to poor landmark detection accuracy, especially for facial images with extreme head poses.
36
+
37
+ To mitigate this annotation inconsistency and inaccuracy issue, we propose to learn facial landmark detection by enforcing multi-view consistency during training. Given images of the same facial identity captured with different head poses, instead of detecting facial landmarks in each facial image separately, we propose a multi-view consistency supervision to locate facial landmarks in a holistic, 3D-aware manner. To enforce multi-view consistency, we introduce a self-projection consistency loss and a multi-view landmark loss in training. We also propose an annotation generation procedure to exploit the merits of lab-controlled data (e.g., multi-view images, consistent annotations) and in-the-wild data (e.g., a wide range of facial expressions and identities). Thanks to this synthetic data, our method does not rely on human annotation to obtain accurate facial landmark locations. Therefore, it alleviates the problem of learning from inaccurate and inconsistent annotations.
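+
+ As an illustration of the idea only (not the authors' exact formulation, which is detailed in the method section), a multi-view landmark consistency term could be sketched as below, assuming predicted 3D landmarks and known camera parameters for a pair of views; all names are placeholders.
+
+ ```python
+ import torch
+
+ def project(points_3d, K, R, t):
+     """Pinhole projection of 3D landmarks (B, N, 3) to 2D pixel coordinates (B, N, 2)."""
+     cam = points_3d @ R.transpose(-1, -2) + t.unsqueeze(1)   # world -> camera
+     pix = cam @ K.transpose(-1, -2)                          # camera -> image plane
+     return pix[..., :2] / pix[..., 2:3].clamp(min=1e-6)
+
+ def multi_view_landmark_loss(lmk3d_v1, lmk3d_v2, cams_v1, cams_v2, lmk2d_v1, lmk2d_v2):
+     """Sketch: 3D landmarks predicted from one view should reproject correctly into both views."""
+     K1, R1, t1 = cams_v1
+     K2, R2, t2 = cams_v2
+     # Cross-view term: predictions from one view must explain the other view.
+     cross = ((project(lmk3d_v1, K2, R2, t2) - lmk2d_v2).norm(dim=-1).mean()
+              + (project(lmk3d_v2, K1, R1, t1) - lmk2d_v1).norm(dim=-1).mean())
+     # Self-projection term: each prediction must also explain its own view.
+     self_proj = ((project(lmk3d_v1, K1, R1, t1) - lmk2d_v1).norm(dim=-1).mean()
+                  + (project(lmk3d_v2, K2, R2, t2) - lmk2d_v2).norm(dim=-1).mean())
+     return cross + self_proj
+ ```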
38
+
39
+ We formulate our solution as a plug-in 3D-aware module, which can be incorporated into any facial landmark detector and can boost a pre-trained model to higher accuracy and multi-view consistency. We demonstrate the effectiveness of our approach through extensive experiments on both synthetic and real datasets. The main contributions of our work are as follows:
40
+
41
+ - We show, for the first time, how to combine the merits of lab-captured face image data (e.g., multi-view) and in-the-wild face image datasets (e.g., appearance diversity). Using our proposed approach, we produce a large-scale synthetic, yet realistic, multi-view face dataset, titled DAD-3DHeads-Syn.
42
+
43
44
+
45
+ - We propose a novel 3D-aware optimization module, which can be plugged into any learning-based facial landmark detection method. By refining an existing landmark detection algorithm using our optimization module, we are able to improve its accuracy and multi-view consistency.
46
+ - We demonstrate the performance improvements of our module built on top of multiple baseline methods on simulated, lab-captured, and in-the-wild datasets.
47
+
48
+ # 2. Related Work
49
+
50
+ In this section, we review face landmark datasets and detection algorithms that are most related to our approach. We also provide a brief review of data simulation tools related to our work.
51
+
52
+ # 2.1. Face Landmark Detection Dataset
53
+
54
+ Lab-controlled dataset. Datasets under "controlled" conditions [8, 20, 36, 39, 46, 48, 64, 65, 72] typically collect videos/images in indoor scenarios under certain restrictions, e.g., pre-defined expressions, head poses, etc. For example, the FaceScape dataset [65] contains 938 individuals, each with 20 expressions, captured with an array of 68 cameras under controlled illumination and positions. Thus, it contains aligned and consistent multi-view images and facial landmark annotations. However, the identities, poses, and expressions are limited. In addition, the environment conditions are fully controlled. This results in limited generalization capability for models trained on this dataset. Moreover, the annotation workflow of such a dataset is expensive and hard to scale.
55
+
56
+ In-the-wild dataset. The boom of internet image sharing has enabled the creation of many "in-the-wild" facial landmark datasets [3, 7, 32, 49, 85], collected from the web, to facilitate facial landmark detection research. However, manually annotating facial landmarks on in-the-wild images is a time-consuming process and not scalable. Zhu et al. [83] release 300W-LP by extending the original 300W dataset with synthetic extreme-pose images obtained by profiling frontal-pose images. However, the novel-view images are generated by simply applying a rotation matrix to the original images, which leads to a limited view range and poor image quality. Meanwhile, 300W-LP lacks diversity in face appearance and expression because of the intrinsic limitations of 300W. Recently, Martyniuk et al. [34] introduce a new dataset, DAD-3DHeads, with a novel annotation scheme.
57
+
58
+ Specifically, their approach allows the annotator to adjust the landmarks by looking at how well the mesh generated from the landmarks fits the input image. The proposed scheme addresses the problems exhibited by existing labeling tools, such as "guessing" the positions of the correct landmarks for invisible parts of the head, thus enabling accurate annotations. The DAD-3DHeads dataset contains 44,898 in-the-wild images, covering extreme facial expressions, poses, and challenging illuminations. However, DAD-3DHeads still has some drawbacks. First, even with the mesh fitting guidance, the annotations can be inaccurate. As shown in Fig. 1 (a), even a small inaccuracy in one view can result in a significant inconsistency when projected to another view. This inconsistency can negatively affect the training of the detection network. Second, since the depth is estimated by FLAME [33], annotation accuracy is limited by the FLAME model. Third, this dataset lacks multi-view images, and thus cannot be used to enforce multi-view consistency.
59
+
60
+ # 2.2. Data Simulation
61
+
62
+ Simulation [26,28,35,42,44,45,50,59,61,62,70] is a useful tool in situations where training data for learning-based methods is expensive to annotate or even hard to acquire. For example, Zeng et al. [70] and Richardson et al. [42] use a 3D Morphable Model (3DMM) to render training data with different lighting conditions, identities, expressions, and texture basis elements for reconstructing detailed facial geometry. However, the simulated images produced by these approaches lack realism and have severe domain gaps compared with real-world captures, limiting their usage. Bak et al. [2] adapt synthetic data using a CycleGAN [81] with a regularization term for preserving identities. Tewari et al. [57] use the images and latent codes generated by StyleGAN to train a controllable portrait image generation model. However, it is hard to control the attribute consistency of images simulated by generative models, which limits the usage of the generated datasets.
63
+
64
+ # 2.3. Face Landmark Detection Algorithms
65
+
66
+ Traditional facial landmark detection methods leverage either holistic facial appearance information [12] or global facial shape patterns [31, 85]. They yield reasonable results for images captured in lab-controlled environments with frontal faces and good lighting; however, their performance on most in-the-wild images is inferior.
67
+
68
+ Recently, deep learning-based algorithms have made promising progress on 2D facial landmark localization [15, 22-24,29,34,40,56,63,74,75,77,82,84] in terms of robustness, generalizability, and accuracy. FAN [6] constructs, for the first time, a very strong baseline by combining a state-of-the-art residual block and a state-of-the-art architecture
69
+
70
+ <table><tr><td>Dataset Type</td><td>Lab-Controlled</td><td>In-the-wild</td><td>Ours</td></tr><tr><td>In-the-wild</td><td>×</td><td>√</td><td>√</td></tr><tr><td>Large Scale</td><td>×</td><td>√</td><td>√</td></tr><tr><td>Balanced</td><td>√</td><td>×</td><td>√</td></tr><tr><td>Multiview Consistent</td><td>√</td><td>×</td><td>√</td></tr><tr><td>Annotation Consistent</td><td>√</td><td>×</td><td>√</td></tr><tr><td>Scalable</td><td>×</td><td>×</td><td>√</td></tr></table>
71
+
72
+ ![](images/094409b388bb96123759f3b785734f8945c95749b6beea7c0ccb43087beaf6f2.jpg)
73
+ Figure 2. Feature comparison of different types of datasets. For example, FaceScape [65] and MultiFace [64] are lab-controlled datasets, while 300W [47], AFLW2000 [68], and DAD-3DHeads [34] are in-the-wild datasets.
74
+ Figure 3. The proposed data simulation pipeline.
75
+
76
+ for landmark localization, and trains it on a very large, synthetically expanded 2D facial landmark dataset. To address self-occlusion and large appearance variation, Zhu et al. [82] propose a cascaded convolutional neural network with an optimized weighted parameter distance cost loss that formulates the priority of 3DMM parameters during training, instead of directly predicting facial landmark keypoints. To further address shape reconstruction and pose estimation simultaneously, Martyniuk et al. propose the end-to-end trained DAD-3DNet [34] to regress 3DMM parameters and recover the 3D head geometry with a differentiable FLAME decoder. However, due to the intrinsic limitations of the manually annotated in-the-wild dataset, the detection results are affected by annotation noise and the 3D inconsistency of single-view images. In this paper, we mainly focus on improving the performance of deep-learning-based methods.
77
+
78
+ # 3. Balanced and Realistic Multi-view Face Dataset
79
+
80
+ We believe there are five desired properties that a good facial landmark dataset should fulfill: (1) contain full range of multi-view images; (2) bridge the domain gap between the dataset and the real-world captured images; (3) contain diverse facial appearance including different poses, expressions, illuminations, and identities; (4) have consistent and accurate annotations across the whole dataset; (5) be
81
+
82
+ easy to obtain and scalable. The existing datasets are either lab-controlled captures [64, 65] or collected in the wild [34, 47, 68]. Unfortunately, these datasets lack one or more of the desired attributes. In contrast, our dataset meets all of these criteria (Fig. 2).
83
+
84
+ Unlike the previous graphics- or generative-model-based data synthesis approaches described in Sec. 2.2, we propose a novel facial dataset simulation scheme that leverages Neural Radiance Fields (NeRF) [37] to facilitate training a facial landmark detection network. Fig. 3 shows our dataset creation pipeline. We generate multi-view images with consistent landmarks using a single in-the-wild image along with its annotated landmarks as input.
85
+
86
+ Specifically, we choose DAD-3DHeads [34] as our initial dataset since it contains images with a variety of extreme poses, facial expressions, challenging illuminations, and severe occlusions. Given an image and its landmarks from this dataset, our goal is to reconstruct multi-view images with their corresponding landmarks. Inspired by GAN inversion [80], we first fit a latent code to each image in the DAD-3DHeads dataset using EG3D [9] as the decoder, following Pivotal Tuning Inversion (PTI) [43]. Note that EG3D GAN inversion requires the camera pose of the input image, which we estimate using Deep3DFace [14]. Then we use EG3D to decode the optimized latent code into a NeRF. Next, we volume-render the NeRF from 512 camera views uniformly sampled over a large view range, producing 512 multi-view images.
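+ As a concrete illustration of the view-sampling step, the sketch below places a camera on a sphere around the head for each sampled (yaw, pitch) pair and builds a look-at extrinsic matrix for rendering. The function names, the radius, and the sampled ranges are illustrative assumptions rather than the exact settings used for DAD-3DHeads-Syn.
+
+ ```python
+ import numpy as np
+
+ def look_at_extrinsic(cam_pos, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0])):
+     """Build a 3x4 world-to-camera extrinsic [R | t] for a camera at cam_pos looking at target."""
+     forward = target - cam_pos
+     forward = forward / np.linalg.norm(forward)
+     right = np.cross(forward, up)
+     right = right / np.linalg.norm(right)
+     down = np.cross(forward, right)
+     R = np.stack([right, down, forward])            # rows are the camera axes in world coordinates
+     t = -R @ cam_pos
+     return np.concatenate([R, t[:, None]], axis=1)  # (3, 4)
+
+ def sample_views(n_views=512, radius=2.7,
+                  yaw_range=(-np.pi / 2, np.pi / 2), pitch_range=(-np.pi / 6, np.pi / 6)):
+     """Sample one camera extrinsic per rendered view over a (yaw, pitch) range around the head."""
+     extrinsics = []
+     for _ in range(n_views):
+         yaw = np.random.uniform(*yaw_range)
+         pitch = np.random.uniform(*pitch_range)
+         cam_pos = radius * np.array([np.sin(yaw) * np.cos(pitch),
+                                      np.sin(pitch),
+                                      np.cos(yaw) * np.cos(pitch)])
+         extrinsics.append(look_at_extrinsic(cam_pos))
+     return np.stack(extrinsics)                      # (n_views, 3, 4)
+ ```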
87
+
88
+ To obtain the landmarks for each image, we start with the well-annotated ground-truth 2D landmarks of the original images from the DAD-3DHeads dataset. Then we use the estimated camera pose of the input image to unproject the annotated landmarks into 3D space. Finally, we project the 3D landmarks onto the 512 sampled camera views to obtain landmark annotations for the simulated views, as sketched below. The simulated dataset not only inherits the merits of DAD-3DHeads (e.g., diverse identities, expressions, poses, and illuminations), but also adds new properties (e.g., balanced head poses, consistent annotations, and multi-view images). In total, there are 2,150,400 training pairs and 204,800 testing pairs in our extended dataset, called DAD-3DHeads-Syn.
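+ A minimal sketch of the projection step is given below (the unprojection step additionally requires a per-landmark depth, which is taken from the fitted geometry). The helper assumes the 3D landmarks are already in world space and that `K` is the fixed camera intrinsic matrix; it is an illustration, not the released tooling.
+
+ ```python
+ import numpy as np
+
+ def project_landmarks(lm3d, M, K):
+     """Project 3D landmarks (68, 3) into one sampled view.
+
+     M: (3, 4) world-to-camera extrinsic, K: (3, 3) intrinsic. Returns (68, 2) pixel coordinates.
+     """
+     lm_h = np.concatenate([lm3d, np.ones((lm3d.shape[0], 1))], axis=1)  # homogeneous (68, 4)
+     cam = lm_h @ M.T                   # camera-space points (68, 3)
+     pix = cam @ K.T                    # apply intrinsics
+     return pix[:, :2] / pix[:, 2:3]    # perspective divide
+
+ # One landmark annotation per sampled view:
+ # view_lms = np.stack([project_landmarks(lm3d, M, K) for M in extrinsics])
+ ```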
89
+
90
+ # 4. 3D-Aware Multi-view Consistency Training
91
+
92
+ # 4.1. Overview
93
+
94
+ The state-of-the-art landmark detectors [5, 34] can output reasonable results on in-the-wild images. However, in many cases we observe that the predicted landmarks float above the face surface instead of fitting the face tightly. We can easily verify whether the detected landmarks fit the face by projecting them onto another view (see Fig. 1(a)). Armed with this observation of multi-view
95
+
96
+ Algorithm 1 3D-Aware Plug-in Module.
97
+ 1: Input: pretrained detector $F$ with weights $\theta$ , $M$ single-view images $I_{1,\dots,M} \in \mathcal{D}$ along with ground truth landmark $L_{1,\dots,M}$ , paired $N$ multi-view images $V_{1,\dots,N} \in \hat{\mathcal{D}}$ along with ground truth landmark $L_{1,\dots,N}$ .
98
+ 2: Output: detector $F$ with updated weights $\theta^{*}$
99
+ 3: Initialization: set $\theta$ to pre-trained weights
100
+ 4: Unfreeze $\theta$
101
+ 5: for number of iterations do
102
+ 6: Output predicted landmarks $\hat{L}_{1,\dots,N}$ for each view.
103
+ 7: Randomly sample $P$ landmarks from them, $(1 < P \leq N)$ .
104
+ 8: Cast the landmarks into world space and estimate the approximate 3D landmark $\dot{L}$ using Eq. 2, 3, 4, 5
105
+ 9: Project $\dot{L}$ onto the image planes of remaining $Q$ views $(Q = N - P)$ using Eq. 6, 7
106
+ 10: Calculate Total Loss $\mathcal{L}$ using Eq. 11
107
+ 11: $\theta^{*} \gets Adam\{\mathcal{L}\}$
108
+
109
+ inconsistency and inaccuracy, we propose a novel 3D-aware training module $\mathcal{R}$ to further improve the performance of a baseline detection algorithm $F$ .
110
+
111
+ Given a facial landmark detection network $F_{\theta}(\cdot)$ pretrained on dataset $\mathcal{D}$ , the proposed module $\mathcal{R}$ further refines the network parameters $\theta$ by leveraging our simulated DAD-3DHeads-Syn dataset $\hat{\mathcal{D}}$ in addition to the original dataset $\mathcal{D}$ . Our module $\mathcal{R}$ can be formulated as:
112
+
113
+ $$
114
+ F _ {\theta^ {*}} \leftarrow \mathcal {R} \left(F _ {\theta}, X, V _ {1, \dots , N}\right), X \in \mathcal {D}, V _ {1, \dots , N} \in \hat {\mathcal {D}}, \tag {1}
115
+ $$
116
+
117
+ where $X$ is the image batch sampled from $\mathcal{D}$ and $V_{1,\dots,N}$ are $N$ multi-view images sampled from $\hat{\mathcal{D}}$ . We refine the network parameters $\theta$ by exploring 3D information among the multi-view images and applying a novel projection consistency during the fine-tuning process. Our module $\mathcal{R}$ does not introduce any new network parameters and can be plugged into any learning-based network. We show the training protocol in Alg. 1.
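+ For concreteness, a minimal PyTorch-style sketch of the protocol in Alg. 1 follows. The helpers `dlt_triangulate`, `project_landmarks`, `mesh_consistency_loss`, `multiview_landmark_loss`, and `original_loss` are assumed placeholders for the operations described in Sec. 4.2 and the baseline's own objective; none of the names come from the released code.
+
+ ```python
+ import torch
+
+ def refine(detector, loader_D, loader_Dhat, extrinsics, num_iters, lr=1e-4, P=4):
+     """3D-aware refinement loop sketched from Alg. 1 (hypothetical helpers, not the official code)."""
+     opt = torch.optim.Adam(detector.parameters(), lr=lr)
+     for _ in range(num_iters):
+         imgs, lms = next(loader_D)                    # M single-view pairs from D
+         views, view_lms, cam_ids = next(loader_Dhat)  # N multi-view pairs from DAD-3DHeads-Syn
+         pred = detector(views)                        # predicted landmarks for all N views
+         perm = torch.randperm(views.shape[0])
+         p_idx, q_idx = perm[:P], perm[P:]             # P views for triangulation, Q held-out views
+         lm3d = dlt_triangulate(pred[p_idx], extrinsics[cam_ids[p_idx]])   # Eq. 2-5
+         reproj = project_landmarks(lm3d, extrinsics[cam_ids[q_idx]])      # Eq. 6-7
+         loss = (0.1 * (pred[q_idx] - reproj).abs().sum(dim=-1).mean()     # self-projection consistency
+                 + 0.1 * mesh_consistency_loss(detector, views)            # Eq. 9
+                 + 0.1 * multiview_landmark_loss(pred, view_lms)           # Eq. 10
+                 + original_loss(detector, imgs, lms))                     # baseline objective
+         opt.zero_grad()
+         loss.backward()
+         opt.step()
+     return detector
+ ```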
118
+
119
+ # 4.2. Multi-view Consistency Supervision
120
+
121
+ We propose a novel multi-view supervision to force the baseline network to learn to be 3D consistent. To simplify notation, we omit the batch dimension and the fixed camera intrinsic matrix. For every training iteration, we randomly sample $N$ image and landmark pairs $\{V,\mathrm{L}\}_{1,\dots,N}$ from $\hat{\mathcal{D}}$ and $M$ image and landmark pairs $\{I,\mathrm{L}\}_{1,\dots,M}$ from the initial dataset $\mathcal{D}$ .
122
+
123
+ We pass $V_{1,\dots,N}$ to the baseline network $F$ to obtain predicted landmarks $\hat{\mathrm{L}}_{1,\dots,N}$ which are shown with green
124
+
125
+ ![](images/0384782a59611be0512f68c552e09ccf4cbd403b35b27bc352572b02ffebd08f.jpg)
126
+ Figure 4. Multi-view Consistency Supervision. Predicted landmarks $\hat{\mathbf{L}}_{1,\dots,N}$ , estimated 3D landmark $\dot{\mathbf{L}}$ , projected landmarks $\tilde{\mathbf{L}}_{1,\dots,Q}$ , and ground truth landmarks $L$ are denoted as green, blue, red, and yellow points respectively. The processes of calculating 3D landmark $\dot{\mathbf{L}}$ and the projection procedure are shown as light blue and pink arrows, respectively. $\mathcal{L}_{\mathrm{Self - Cons}}$ and $\mathcal{L}_{\mathrm{Multiview}}$ are represented as red and light green lines, respectively.
127
+
128
+ points in Fig. 4. We then randomly select $P$ predicted landmarks $\hat{\mathrm{L}}_{1,\dots,P} \in \mathbb{R}^{P \times 68 \times 2}$ from $\hat{\mathrm{L}}_{1,\dots,N}$ to calculate the "canonical" 3D landmark $\dot{\mathrm{L}} \in \mathbb{R}^{68 \times 3}$ , as shown by the blue point in Fig. 4. We calculate each keypoint of the "canonical" 3D landmark $\dot{\mathrm{L}}^{(k)} \in \mathbb{R}^3, 1 \leq k \leq 68$ through Direct Linear Transformation (DLT) [16, 25], as follows:
129
+
130
+ $$
131
+ \mu_{p} = \mathbb{M}_{p}[0,:] - \mathbb{M}_{p}[2,:] \cdot \hat{\mathrm{L}}_{p}^{(k)}[0] \in \mathbb{R}^{4}, \tag{2}
132
+ $$
133
+
134
+ $$
135
+ v_{p} = \mathbb{M}_{p}[1,:] - \mathbb{M}_{p}[2,:] \cdot \hat{\mathrm{L}}_{p}^{(k)}[1] \in \mathbb{R}^{4}, \tag{3}
136
+ $$
137
+
138
+ $$
139
+ \mathbf{A} = \left[ \mu_{1} \mid \mu_{2} \mid \dots \mid \mu_{P} \mid v_{1} \mid v_{2} \mid \dots \mid v_{P} \right]^{T} \in \mathbb{R}^{2P \times 4}, \tag{4}
140
+ $$
141
+
142
+ $$
143
+ \dot{\mathrm{L}}^{(k)} = \left(\mathbf{A}[:, :3]^{T} \mathbf{A}[:, :3]\right)^{-1} \mathbf{A}[:, :3]^{T} \left(-\mathbf{A}[:, 3]\right), \tag{5}
144
+ $$
145
+
146
+ where $p$ , $1 \leq p \leq P$ , is the view index, and $\mathbb{M}_{1,\dots,P}$ are the corresponding camera extrinsic matrices, which are pre-defined for view synthesis during volume rendering (see Sec. 3). Moreover, $\mathbb{M}_p[i,:]$ indicates the $i$ -th row of $\mathbb{M}_p$ , $\mathbf{A}[:,:i]$ indicates columns 0 to $i - 1$ of $\mathbf{A}$ , and $\mathbf{A}[:,i]$ indicates the $i$ -th column of $\mathbf{A}$ . By Eq. 2 and Eq. 3, we first form the projection constraints for $\dot{\mathrm{L}}^{(k)}$ , i.e., $\mu_p[:3] \cdot \dot{\mathrm{L}}^{(k)} + \mu_p[3] = 0$ and analogously for $v_p$ , where ' $\cdot$ ' indicates the dot product. Then we stack all of the constraints into $\mathbf{A} \in \mathbb{R}^{2P \times 4}$ by Eq. 4. Finally, we compute $\dot{\mathrm{L}}^{(k)}$ with a least-squares solution (Eq. 5).
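+ A minimal NumPy sketch of this triangulation for a single keypoint is shown below; it assumes the 2D coordinates and the $3 \times 4$ camera matrices live in a consistent coordinate system (i.e., with the fixed intrinsics folded in, matching the simplified notation above).
+
+ ```python
+ import numpy as np
+
+ def dlt_keypoint(lm2d, Ms):
+     """Triangulate one canonical 3D keypoint from its 2D predictions in P views (Eq. 2-5).
+
+     lm2d: (P, 2) predicted 2D coordinates, Ms: (P, 3, 4) camera matrices.
+     """
+     rows = []
+     for (x, y), M in zip(lm2d, Ms):
+         rows.append(M[0, :] - x * M[2, :])   # Eq. 2
+         rows.append(M[1, :] - y * M[2, :])   # Eq. 3
+     A = np.stack(rows)                        # (2P, 4), Eq. 4
+     # Least-squares solution of A[:, :3] @ X = -A[:, 3]  (Eq. 5)
+     X, *_ = np.linalg.lstsq(A[:, :3], -A[:, 3], rcond=None)
+     return X                                  # (3,) canonical 3D keypoint
+ ```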
147
+
148
+ After obtaining the "canonical" 3D landmark $\dot{\mathrm{L}}$ , we project it onto the image planes of the remaining $Q = N - P$ views to obtain the projected landmarks $\tilde{\mathrm{L}}_{1,\dots,Q}$ , shown as red points in Fig. 4, by the following equations:
149
+
150
+ $$
151
+ s = \mathbb{M}_{q}[:, :3]\, \dot{\mathrm{L}}^{(k)} + \mathbb{M}_{q}[:, 3] \in \mathbb{R}^{3 \times 1}, \tag{6}
152
+ $$
153
+
154
+ $$
155
+ \tilde {\mathrm {L}} _ {q} ^ {(k)} = \left[ \begin{array}{c} s [ 0 ] / s [ 2 ] \\ s [ 1 ] / s [ 2 ] \end{array} \right] \in \mathbb {R} ^ {2 \times 1}, \tag {7}
156
+ $$
157
+
158
+ where, in our case, $1 \leq q \leq Q$ . Eq. 6 transforms the 3D landmark from "canonical" space to the camera space of view $q$ , and Eq. 7 transforms it from camera space to image space.
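+ In code, the two equations reduce to a few lines (a sketch under the same intrinsics-folded-in assumption as above; the function name is illustrative):
+
+ ```python
+ import numpy as np
+
+ def reproject(lm3d_k, M_q):
+     """Project one canonical 3D keypoint into view q (Eq. 6 and Eq. 7)."""
+     s = M_q[:, :3] @ lm3d_k + M_q[:, 3]   # Eq. 6: canonical space -> camera space of view q
+     return s[:2] / s[2]                    # Eq. 7: camera space -> image space
+ ```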
159
+
160
+ Self-Projection Consistency Loss. Since all $N$ views are sampled from one NeRF under different camera views, the predicted landmarks $\hat{\mathrm{L}}_{1,\dots,Q}$ and the projected landmarks $\tilde{\mathrm{L}}_{1,\dots,Q}$ should be consistent. Therefore, we propose to minimize the error between the predicted and projected landmarks as follows:
161
+
162
+ $$
163
+ \mathcal{L}_{\text{Self-Cons}} = \sum_{q=1}^{Q} \| \hat{\mathrm{L}}_{q} - \tilde{\mathrm{L}}_{q} \|_{1}. \tag{8}
164
+ $$
165
+
166
+ Mesh Consistency Loss. Besides the self-projection consistency, all the $N$ views also share one mesh topology in the canonical space. Therefore, we apply a mesh consistency loss in canonical space, calculated as:
167
+
168
+ $$
169
+ \mathcal{L}_{\text{Mesh-Cons}} = \sum_{n=1}^{N} \| \hat{\mathrm{M}}_{n} - \dot{\mathrm{M}} \|_{2}, \tag{9}
170
+ $$
171
+
172
+ where $\hat{\mathrm{M}}_n$ is the predicted mesh of view $n$ in the canonical space, and $\dot{\mathrm{M}}$ is the ground truth mesh of the original reference image.
173
+
174
+ Multiview Landmark Loss. We also minimize the distance between the predicted 2D facial landmarks and the corresponding multi-view ground truth landmarks we obtained in Sec. 3, which are denoted as yellow points in Fig. 4. The loss can be formulated as follows:
175
+
176
+ $$
177
+ \mathcal{L}_{\text{Multiview}} = \sum_{n=1}^{N} \| \hat{\mathrm{L}}_{n} - \mathrm{L}_{n} \|_{1}. \tag{10}
178
+ $$
179
+
180
+ We also incorporate the original loss of the baseline method computed with the image and landmark pairs $\{I,L\}_{1,\dots ,M}$ from dataset $\mathcal{D}$ to stabilize our 3D-aware training. The overall loss is:
181
+
182
+ $$
183
+ \mathcal{L} = \lambda_{1} \mathcal{L}_{\text{Self-Cons}} + \lambda_{2} \mathcal{L}_{\text{Mesh-Cons}} + \lambda_{3} \mathcal{L}_{\text{Multiview}} + \mathcal{L}_{\text{original}}, \tag{11}
184
+ $$
185
+
186
+ where $\lambda_{1,2,3}$ are hyperparameters that control the contribution of each component. We set $\lambda_{1,2,3}$ to 0.1 empirically.
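+ A PyTorch-style sketch of how Eq. 8-11 can be combined is given below; the tensor names are illustrative, and the sums are written as means, i.e., the terms are correct up to a normalization constant.
+
+ ```python
+ import torch
+
+ def total_loss(pred_q, reproj_q, pred_all, gt_all, pred_meshes, gt_mesh, l_original,
+                lambdas=(0.1, 0.1, 0.1)):
+     """Combine the loss terms of Eq. 11 for one training iteration."""
+     l_self = (pred_q - reproj_q).abs().sum(dim=-1).mean()        # Eq. 8: predicted vs. projected (L1)
+     l_mesh = torch.norm(pred_meshes - gt_mesh, dim=-1).mean()    # Eq. 9: canonical-space mesh (L2)
+     l_multi = (pred_all - gt_all).abs().sum(dim=-1).mean()       # Eq. 10: multi-view ground truth (L1)
+     return lambdas[0] * l_self + lambdas[1] * l_mesh + lambdas[2] * l_multi + l_original
+ ```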
187
+
188
+ Note that our training is a plug-in module and can easily be incorporated into any existing facial landmark detector. For different pretrained models, we only need to change $\mathcal{L}_{\mathrm{original}}$ , while the other novel loss components, calculated on our balanced synthetic dataset $\hat{\mathcal{D}}$ , can be applied directly. We show this plug-in capability on top of different baseline methods (e.g., DAD-3DNet [34] and 3DDFA [22]), and demonstrate that our 3D-aware training indeed improves their performance (see Sec. 5).
189
+
190
+ Table 1. Facial landmark detection result (NME) on DAD-3DHeads [34], FaceScape [65], and MultiFace [64]. Lower values mean better results.
191
+
192
+ <table><tr><td>Method</td><td>DAD-3DHeads</td><td>FaceScape</td><td>MultiFace</td></tr><tr><td>FAN [6]</td><td>7.141</td><td>16.74</td><td>16.143</td></tr><tr><td>Dlib [31]</td><td>10.841</td><td>29.431</td><td>18.205</td></tr><tr><td>3DDFA-V2 [23]</td><td>2.926</td><td>6.853</td><td>5.942</td></tr><tr><td>3DDFA [22]</td><td>4.082</td><td>7.988</td><td>8.121</td></tr><tr><td>3DDFA+</td><td>3.784</td><td>7.425</td><td>7.305</td></tr><tr><td>DAD-3DNet [34]</td><td>2.599</td><td>6.681</td><td>5.786</td></tr><tr><td>DAD-3DNet+</td><td>2.503</td><td>6.050</td><td>5.480</td></tr></table>
193
+
194
+ # 5. Experiments
195
+
196
+ # 5.1. Experimental Settings
197
+
198
+ Training Details. We implement our algorithm in PyTorch and adopt Adam to optimize the baseline networks. We run our 3D-aware training for 100 epochs with a batch size of 4 and a learning rate of $1 \times 10^{-4}$ on each baseline network. As for computational cost, fine-tuning DAD-3DNet takes about 16.25 hours on 4 NVIDIA RTX A6000 GPUs.
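+ In code, the reported fine-tuning setup amounts to the following (where `detector` is the pretrained baseline being refined and the data pipeline is assumed):
+
+ ```python
+ import torch
+
+ optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)   # reported learning rate
+ num_epochs, batch_size = 100, 4                                # reported schedule and batch size
+ ```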
199
+
200
+ Dataset. Besides DAD-3DHeads, we use two additional datasets to conduct the evaluations.
201
+
202
+ - DAD-3DHeads [34] is the state-of-the-art in-the-wild 3D head dataset, which contains dense, accurate annotations, and diverse facial appearances. It consists of 44,898 images collected from various sources (37,840 in the training set, 4,312 in the validation set, and 2,746 in the test set).
203
+ - FaceScape [65] is a large-scale high-quality lab-controlled 3D face dataset, which contains 18,760 examples, captured from 938 subjects and each with 20 specific expressions.
204
+ - MultiFace [64] is a new multi-view, high-resolution human face dataset collected from 13 identities for neural face rendering.
205
+
206
+ Training and Testing Split. In all the experiments, we only refine the baseline models with the training set of our DAD-3DHeads-Syn and their original training dataset. We use the test sets of DAD-3DHeads-Syn and DAD-3DHeads [34], and use the full datasets of FaceScape [65] and MultiFace [64] for performance evaluation. None of the compared methods has been trained on these test sets.
207
+
208
+ Evaluation Metrics. We evaluate the facial landmark distance by calculating the Normalized Mean Error (NME). We normalize the landmark error by dividing by the image resolution instead of the inter-ocular distance [55], since all the test images are aligned with offline tools. We calculate the head
209
+
210
+ pose error as the absolute difference of the Euler angle values.
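+ The two metrics can be sketched as below (array shapes and names are illustrative; the paper normalizes by the image resolution since all test images are pre-aligned):
+
+ ```python
+ import numpy as np
+
+ def nme(pred, gt, img_size):
+     """Normalized Mean Error: mean landmark distance divided by the image resolution."""
+     return np.linalg.norm(pred - gt, axis=-1).mean() / img_size
+
+ def head_pose_error(pred_angles, gt_angles):
+     """Absolute difference of the Euler angles (pitch, roll, yaw)."""
+     return np.abs(np.asarray(pred_angles) - np.asarray(gt_angles))
+ ```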
211
+
212
+ # 5.2. Quantitative Evaluation
213
+
214
+ Landmark Detection Results. The quantitative landmark detection results on DAD-3DHeads [34], FaceScape [65], and MultiFace [64] are shown in Tab. 1. We find that DAD-3DNet+, refined by our 3D-aware multi-view consistency training, achieves the best performance on all three datasets. Moreover, according to the results of 3DDFA [22], 3DDFA+, DAD-3DNet [34], and DAD-3DNet+, we find that after refinement, the new models (3DDFA+ and DAD-3DNet+) achieve much better results than the baseline models. For example, the detection error of DAD-3DNet [34] drops by 0.631 and 0.306, a $9\%$ and $5\%$ improvement, on the FaceScape and MultiFace datasets, respectively. Similarly, we improve 3DDFA [22] by 0.298 ( $7\%$ ), 0.563 ( $7\%$ ), and 0.816 ( $10\%$ ) on the DAD-3DHeads, FaceScape, and MultiFace datasets, respectively. We attribute the improvement to our proposed 3D-aware multi-view training. One interesting phenomenon is that all methods perform better on the DAD-3DHeads dataset than on the other two lab-captured datasets. We attribute this to the extreme head poses and challenging facial expressions in the other two datasets. We plot the head pose distribution of DAD-3DHeads (see supplementary materials) and find that its head pose distribution is not as uniform as those of the two lab-controlled datasets.
215
+
216
+ Head Pose Estimation Results. Tab. 2 shows the head pose estimation error on DAD-3DHeads [34] and FaceScape [65]. Our DAD-3DNet+ achieves the best performance on most metrics. Similar to the landmark results, we can also conclude that the head pose estimation accuracy of the baseline methods (3DDFA and DAD-3DNet) is improved by our 3D-aware multi-view consistency training (3DDFA+ and DAD-3DNet+). For example, after refinement, DAD-3DNet+ achieves $11.9\%$ and $18.8\%$ reductions in overall head pose error on the DAD-3DHeads and FaceScape datasets, respectively.
217
+
218
+ # 5.3. Qualitative Evaluation
219
+
220
+ We first show visual comparisons on images randomly sampled from the DAD-3DHeads test set [34] in Fig. 5. The landmarks predicted by our DAD-3DNet+ model fit the individual's face more tightly than the other predictions. Furthermore, by comparing the third (3DDFA [22]) and fourth (ours) columns, we can see that the refined model (3DDFA+) improves the landmark accuracy dramatically. Similar visual improvements can be found in the sixth (DAD-3DNet) and seventh (DAD-3DNet+) columns as well. Comparing the sixth and seventh columns, we can see that the refinement training drags and rotates the landmarks in 3D space to better fit them to the individual's face surface. We attribute this abil
221
+
222
+ ![](images/7aac1a8c3b9941c1668256c37762e4d1d7cf20e416080d39a9115a187a821aa4.jpg)
223
+ Figure 5. The visual results of Dlib [31], FAN [5], 3DDFA [22], our refined 3DDFA+, 3DDFA-V2, DAD-3DNet [34], and our refined DAD-3DNet+ on images randomly sampled from the DAD-3DHeads [34] test set. We show the enlarged error region (white box) in the middle row.
224
+
225
+ Table 2. Head pose estimation results (head pose error) on DAD-3DHeads [34], FaceScape [65]. Lower values mean better results.
226
+
227
+ <table><tr><td></td><td colspan="4">DAD-3DHeads</td><td colspan="4">FaceScape</td></tr><tr><td></td><td>Pitch</td><td>Roll</td><td>Yaw</td><td>Overall</td><td>Pitch</td><td>Roll</td><td>Yaw</td><td>Overall</td></tr><tr><td>FAN [5]</td><td>9.765</td><td>5.376</td><td>6.390</td><td>7.177</td><td>8.774</td><td>4.895</td><td>6.556</td><td>6.742</td></tr><tr><td>Dlib [31]</td><td>13.352</td><td>11.799</td><td>14.654</td><td>13.268</td><td>17.861</td><td>12.663</td><td>19.548</td><td>16.691</td></tr><tr><td>3DDFA-V2 [23]</td><td>7.901</td><td>4.989</td><td>6.088</td><td>6.326</td><td>13.741</td><td>9.718</td><td>11.353</td><td>11.604</td></tr><tr><td>3DDFA [22]</td><td>9.895</td><td>7.977</td><td>8.996</td><td>8.956</td><td>20.789</td><td>18.145</td><td>19.692</td><td>19.752</td></tr><tr><td>3DDFA+</td><td>9.195</td><td>6.792</td><td>8.692</td><td>8.226</td><td>20.996</td><td>16.426</td><td>19.054</td><td>18.826</td></tr><tr><td>DAD-3DNet [34]</td><td>8.274</td><td>4.666</td><td>9.206</td><td>7.382</td><td>15.851</td><td>9.676</td><td>18.346</td><td>14.624</td></tr><tr><td>DAD-3DNet+</td><td>7.700</td><td>4.274</td><td>7.528</td><td>6.500</td><td>14.466</td><td>7.247</td><td>13.876</td><td>11.863</td></tr></table>
228
+
229
+ ity to our 3D-aware multi-view consistency training, which gives the refined model a better sense of 3D space and, therefore, improves the landmark detection results.
230
+
231
+ To further validate the improvement gained by the proposed 3D-aware multi-view consistency training, we show the visual results (Fig. 6) of 3DDFA [22], our refined 3DDFA+, DAD-3DNet [34], and our refined DAD-3DNet+ on images sampled from four different test sets. We can find that our proposed refinement improves the landmark detection results in the eye, mouth, and face contour regions, which usually contain more appearance dynamics than the other areas.
232
+
233
+ # 5.4. Performance Improvement Analysis
234
+
235
+ To systematically understand the source of improvement after refining the baseline methods (DAD-3DNet [34] and 3DDFA [22]) with our proposed 3D-aware multi-view consistency training, we further calculate and plot the landmark and head pose error improvements on DAD-3DHeads [34] (see Fig. 7). Instead of calculating the overall improved
236
+
237
+ error score, we split all the testing images into different groups according to their head pose values and calculate the improved error score within each group. We find that the improvement from our training becomes more obvious as the head pose becomes more challenging. For example, the landmark error improvement (Fig. 7, upper section) using our method built on top of 3DDFA [22] increases from 0.12 to 0.71. Similarly, the head pose estimation error improvement (Fig. 7, lower section) using our method built on top of DAD-3DNet [34] increases from 0.02 to 2.7. We also show the detection result visualization in Fig. 8. We can see that, from left to right, as the head pose increases, the error of DAD-3DNet+ (second row) is more stable than that of DAD-3DNet (first row). Based on this trend, we conclude that our proposed 3D-aware multi-view consistency training provides a more significant improvement over the baselines on images with larger head poses. This verifies our hypothesis that multi-view consistency training enables the network to learn 3D-aware information, which benefits the detection results on images with large head poses.
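+ Such a per-pose-range analysis can be reproduced with a simple binning procedure like the one below (the bin edges and names are illustrative, not the paper's exact grouping):
+
+ ```python
+ import numpy as np
+
+ def improvement_by_pose(yaw_deg, err_base, err_refined, bins=(0, 15, 30, 45, 60, 90)):
+     """Group test images by absolute yaw and report the mean error drop of the refined model per group."""
+     yaw_deg = np.abs(np.asarray(yaw_deg))
+     err_base, err_refined = np.asarray(err_base), np.asarray(err_refined)
+     gains = []
+     for lo, hi in zip(bins[:-1], bins[1:]):
+         mask = (yaw_deg >= lo) & (yaw_deg < hi)
+         gains.append((err_base[mask] - err_refined[mask]).mean() if mask.any() else float("nan"))
+     return gains            # one improvement score per head-pose range
+ ```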
238
+
239
+ ![](images/f2ac3d6ce01550466943c2f9f8af1f2f6a4d21af6e231534947de31734b3a21c.jpg)
240
+ Figure 6. The visual comparisons between baseline methods and the refined methods on four testing sets. The left column and upper row list the dataset and method names, respectively. $^+$ denotes the model that has been refined by our 3D-aware training.
241
+
242
+ ![](images/eb3cbcd9dc8c8a1441823c5b341c0f8f352ccce72c74df1480e3a0358cf7b24d.jpg)
243
+
244
+ ![](images/26af171f93b204cdcd7c84063eda237916b5601d8ebf9d2f080cc61ffedc438e.jpg)
245
+ Figure 7. The landmark (top) and head pose (bottom) error improvement over DAD-3DNet [34] and 3DDFA [22] on images from different head pose ranges. The solid and dotted lines indicate DAD-3DNet [34] vs. DAD-3DNet+ (ours) and 3DDFA [22] vs. 3DDFA+ (ours).
246
+
247
+ # 5.5. Ablation Study
248
+
249
+ We conduct an ablation study on FaceScape [65] to verify the importance of the main components of our design. As shown in Tab. 3, we report the landmark NME and the pose estimation MAE in these ablation experiments. Based on these numbers, we can see that the performance degrades
250
+
251
+ ![](images/af87e6bdb2c2cb2d76c4ffc07ba3a5063e33a0cefd29dff5abab44fd06970664.jpg)
252
+ Figure 8. The error visualization of DAD-3DNet [34] and our DAD-3DNet+ on MultiFace [64] dataset. The white and green dots are the ground truth and predicted landmarks, respectively. We use the red line to show the error distance. From left to right, the head pose increases gradually.
253
+
254
+ Table 3. Ablation Study on FaceScape [65]. The top 2 numbers are shown in bold.
255
+
256
+ <table><tr><td></td><td>Component</td><td>NME ↓</td><td>Pose ↓</td></tr><tr><td>1</td><td>full model (P=4)</td><td>6.050</td><td>11.863</td></tr><tr><td>2</td><td>w/o LMesh-Cons</td><td>6.168</td><td>12.327</td></tr><tr><td>3</td><td>w/o LSelf-Cons</td><td>6.541</td><td>13.623</td></tr><tr><td>4</td><td>full model (P=8)</td><td>6.048</td><td>11.923</td></tr><tr><td>5</td><td>full model (P=16)</td><td>6.098</td><td>11.902</td></tr><tr><td>6</td><td>full model (P=32)</td><td>6.139</td><td>11.912</td></tr></table>
257
+
258
+ drastically when we remove $\mathcal{L}_{\mathrm{Self - Cons}}$ . Removing $\mathcal{L}_{\mathrm{Mesh - Cons}}$ also negatively impacts the results, demonstrating its importance. In addition, estimating the 3D landmarks in world space using fewer views leads to better results. This is a significant advantage, as it makes our fine-tuning process more efficient.
259
+
260
+ # 6. Conclusion
261
+
262
+ We propose 3D-aware multi-view consistency training, a new framework for improving deep-learning-based landmark detection algorithms. Through a set of novel loss functions, we force the network to produce landmarks that are 3D consistent. We additionally introduce a novel dataset simulation pipeline to combine the merits of lab-controlled captures and in-the-wild collected images. The models refined by our method outperform previous approaches in terms of landmark detection accuracy and head pose estimation accuracy. Admittedly, our work has some limitations. For example, our proposed training relies on the performance of the baseline method. If the pretrained baseline yields poor initial predictions, our DLT would fail to estimate a reasonable canonical 3D landmark, affecting the performance of the proposed self-projection consistency loss. Investigating ways to reduce the reliance on the accuracy of the baseline methods would be an interesting direction for future research.
263
+
264
+ # References
265
+
266
+ [1] Vitor Albiero, Xingyu Chen, Xi Yin, Guan Pang, and Tal Hassner. img2pose: Face alignment and detection via 6dof, face pose estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7617-7627, 2021.
267
+ [2] Slawomir Bak, Peter Carr, and Jean-Francois Lalonde. Domain adaptation through synthesis for unsupervised person re-identification. In Proceedings of the European conference on computer vision (ECCV), pages 189–205, 2018.
268
+ [3] Peter N Belhumeur, David W Jacobs, David J Kriegman, and Neeraj Kumar. Localizing parts of faces using a consensus of exemplars. IEEE transactions on pattern analysis and machine intelligence, 35(12):2930-2940, 2013.
269
+ [4] Adrian Bulat and Georgios Tzimiropoulos. Two-stage convolutional part heatmap regression for the 1st 3d face alignment in the wild (3dfaw) challenge. In European Conference on Computer Vision, pages 616-624. Springer, 2016.
270
+ [5] Adrian Bulat and Georgios Tzimiropoulos. Binarized convolutional landmark localizers for human pose estimation and face alignment with limited resources. In Proceedings of the IEEE International Conference on Computer Vision, pages 3706-3714, 2017.
271
+ [6] Adrian Bulat and Georgios Tzimiropoulos. How far are we from solving the 2d & 3d face alignment problem? (and a dataset of 230,000 3d facial landmarks). In Proceedings of the IEEE International Conference on Computer Vision, pages 1021-1030, 2017.
272
+ [7] Xavier P Burgos-Artizzu, Pietro Perona, and Piotr Dollár. Robust face landmark estimation under occlusion. In Proceedings of the IEEE international conference on computer vision, pages 1513-1520, 2013.
273
+ [8] Chen Cao, Yanlin Weng, Stephen Lin, and Kun Zhou. 3d shape regression for real-time facial animation. ACM Transactions on Graphics (TOG), 32(4):1-10, 2013.
274
+ [9] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3d generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16123-16133, 2022.
275
+ [10] Dong Chen, Shaoqing Ren, Yichen Wei, Xudong Cao, and Jian Sun. Joint cascade face detection and alignment. In European conference on computer vision, pages 109-122. Springer, 2014.
276
+ [11] Lele Chen, Ross K Maddox, Zhiyao Duan, and Chenliang Xu. Hierarchical cross-modal talking face generation with dynamic pixel-wise loss. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7832-7841, 2019.
277
+ [12] Timothy F Cootes, Gareth J Edwards, and Christopher J Taylor. Active appearance models. In European conference on computer vision, pages 484-498. Springer, 1998.
278
+ [13] Jiankang Deng, Jia Guo, Evangelos Ververas, Irene Kotsia, and Stefanos Zafeiriou. Retinaface: Single-shot multi-level face localisation in the wild. In Proceedings of the IEEE con
279
+
280
+ ference on computer vision and pattern recognition, pages 5203-5212, 2020.
281
+ [14] Yu Deng, Jiaolong Yang, Sicheng Xu, Dong Chen, Yunde Jia, and Xin Tong. Accurate 3d face reconstruction with weakly-supervised learning: From single image to image set. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 0-0, 2019.
282
+ [15] Xuanyi Dong, Yan Yan, Wanli Ouyang, and Yi Yang. Style aggregated network for facial landmark detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, June 2018.
283
+ [16] Xuanyi Dong, Yi Yang, Shih-En Wei, Xinshuo Weng, Yaser Sheikh, and Shouu-I Yu. Supervision by registration and triangulation for landmark detection. IEEE transactions on pattern analysis and machine intelligence, 43(10):3681-3694, 2020.
284
+ [17] Pengfei Dou, Shishir K Shah, and Ioannis A Kakadiaris. End-to-end 3d face reconstruction with deep neural networks. In proceedings of the IEEE conference on computer vision and pattern recognition, pages 5908-5917, 2017.
285
+ [18] Yao Feng, Fan Wu, Xiaohu Shao, Yanfeng Wang, and Xi Zhou. Joint 3d face reconstruction and dense alignment with position map regression network. In Proceedings of the European conference on computer vision (ECCV), pages 534-551, 2018.
286
+ [19] Golnaz Ghiasi and Charless C Fowlkes. Occlusion coherence: Detecting and localizing occluded faces. arXiv preprint arXiv:1506.08347, 2015.
287
+ [20] Ralph Gross, Iain Matthews, Jeffrey Cohn, Takeo Kanade, and Simon Baker. Multi-pie. Image and vision computing, 28(5):807-813, 2010.
288
+ [21] Kuangxiao Gu, Yuqian Zhou, and Thomas Huang. Fnet: Landmark driven fetching and learning network for faithful talking facial animation synthesis. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 10861-10868, 2020.
289
+ [22] Jianzhu Guo, Xiangyu Zhu, and Zhen Lei. 3ddfa. https://github.com/cleardusk/3DDFA, 2018.
290
+ [23] Jianzhu Guo, Xiangyu Zhu, Yang Yang, Fan Yang, Zhen Lei, and Stan Z Li. Towards fast, accurate and stable 3d dense face alignment. In European Conference on Computer Vision, pages 152-168. Springer, 2020.
291
+ [24] Xiaojie Guo, Siyuan Li, Jinke Yu, Jiawan Zhang, Jiayi Ma, Lin Ma, Wei Liu, and Haibin Ling. Pfld: A practical facial landmark detector. arXiv preprint arXiv:1902.10859, 2019.
292
+ [25] Richard Hartley and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003.
293
+ [26] Stefan Hinterstoisser, Vincent Lepetit, Paul Wohlhart, and Kurt Konolige. On pre-trained image features and synthetic images for deep learning. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, pages 0-0, 2018.
294
+ [27] Xinya Ji, Hang Zhou, Kaisiyuan Wang, Qianyi Wu, Wayne Wu, Feng Xu, and Xun Cao. Eamm: One-shot emotional talking face via audio-based emotion-aware motion model. In ACM SIGGRAPH 2022 Conference Proceedings, SIGGRAPH '22, 2022.
295
+
296
+ [28] Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2901–2910, 2017.
297
+ [29] Amin Jourabloo and Xiaoming Liu. Large-pose face alignment via cnn-based dense 3d model fitting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4188-4196, 2016.
298
+ [30] Ira Kemelmacher-Shlizerman and Ronen Basri. 3d face reconstruction from a single image using a single reference face shape. IEEE transactions on pattern analysis and machine intelligence, 33(2):394-405, 2010.
299
+ [31] Davis E. King. Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research, 10:1755-1758, 2009.
300
+ [32] Martin Koestinger, Paul Wohlhart, Peter M Roth, and Horst Bischof. Annotated facial landmarks in the wild: A largescale, real-world database for facial landmark localization. In 2011 IEEE international conference on computer vision workshops (ICCV workshops), pages 2144-2151. IEEE, 2011.
301
+ [33] Tianye Li, Timo Bolkart, Michael. J. Black, Hao Li, and Javier Romero. Learning a model of facial shape and expression from 4D scans. ACM Transactions on Graphics, (Proc. SIGGRAPH Asia), 36(6), 2017.
302
+ [34] Tetiana Martyniuk, Orest Kupyn, Yana Kurlyak, Igor Krashenyi, Jiri Matas, and Viktoriya Sharmanska. DAD-3DHeads: A large-scale dense, accurate and diverse dataset for 3D head alignment from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 20942–20952, 2022.
303
+ [35] Nikolaus Mayer, Eddy Ilg, Philipp Fischer, Caner Hazirbas, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox. What makes good synthetic training data for learning disparity and optical flow estimation? International Journal of Computer Vision, 126(9):942-960, 2018.
304
+ [36] Kieron Messer, Jiri Matas, Josef Kittler, Juergen Luettin, Gilbert Maitre, et al. Xm2vtsdb: The extended m2vts database. In Second international conference on audio and video-based biometric person authentication, volume 964, pages 965-966. Citeseer, 1999.
305
+ [37] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021.
306
+ [38] Erik Murphy-Chutorian and Mohan Manubhai Trivedi. Head pose estimation in computer vision: A survey. IEEE transactions on pattern analysis and machine intelligence, 31(4):607-626, 2008.
307
+ [39] P Jonathon Phillips, Patrick J Flynn, Todd Scruggs, Kevin W Bowyer, Jin Chang, Kevin Hoffman, Joe Marques, Jaesik Min, and William Worek. Overview of the face recognition grand challenge. In 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR'05), volume 1, pages 947-954. IEEE, 2005.
308
+ [40] Shengju Qian, Keqiang Sun, Wayne Wu, Chen Qian, and Jiaya Jia. Aggregation via separation: Boosting facial land
309
+
310
+ mark detector with semi-supervised style translation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10153-10163, 2019.
311
+ [41] Rajeev Ranjan, Vishal M Patel, and Rama Chellappa. Hyperface: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition. IEEE transactions on pattern analysis and machine intelligence, 41(1):121-135, 2017.
312
+ [42] Elad Richardson, Matan Sela, Roy Or-El, and Ron Kimmel. Learning detailed face reconstruction from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1259–1268, 2017.
313
+ [43] Daniel Roich, Ron Mokady, Amit H Bermano, and Daniel Cohen-Or. Pivotal tuning for latent-based editing of real images. ACM Transactions on Graphics (TOG), 42(1):1-13, 2022.
314
+ [44] Andreas Rossler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Nießner. Faceforensics: Learning to detect manipulated facial images. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1-11, 2019.
315
+ [45] Nataniel Ruiz, Samuel Schulter, and Manmohan Chandraker. Learning to simulate. In International Conference on Learning Representations, 2019.
316
+ [46] Christos Sagonas, Georgios Tzimiropoulos, Stefanos Zafeiriou, and Maja Pantic. 300 faces in-the-wild challenge: The first facial landmark localization challenge. In Proceedings of the IEEE international conference on computer vision workshops, pages 397-403, 2013.
317
+ [47] Christos Sagonas, Georgios Tzimiropoulos, Stefanos Zafeiriou, and Maja Pantic. 300 faces in-the-wild challenge: The first facial landmark localization challenge. In Proceedings of the IEEE international conference on computer vision workshops, pages 397-403, 2013.
318
+ [48] Christos Sagonas, Georgios Tzimiropoulos, Stefanos Zafeiriou, and Maja Pantic. A semi-automatic methodology for facial landmark annotation. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 896-903, 2013.
319
+ [49] Jie Shen, Stefanos Zafeiriou, Grigoris G Chrysos, Jean Kossaifi, Georgios Tzimiropoulos, and Maja Pantic. The first facial landmark tracking in-the-wild challenge: Benchmark and results. In Proceedings of the IEEE international conference on computer vision workshops, pages 50-58, 2015.
320
+ [50] Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Joshua Susskind, Wenda Wang, and Russell Webb. Learning from simulated and unsupervised images through adversarial training. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2107-2116, 2017.
321
+ [51] Linsen Song, Wayne Wu, Chaoyou Fu, Chen Change Loy, and Ran He. Audio-driven dubbing for user generated contents via style-aware semi-parametric synthesis. IEEE Transactions on Circuits and Systems for Video Technology, 2022.
322
+ [52] Linsen Song, Wayne Wu, Chen Qian, Ran He, and Chen Change Loy. Everybody's talkin': Let me talk as you want. IEEE Transactions on Information Forensics and Security, 17:585-598, 2022.
323
+
324
+ [53] Yang Song, Jingwen Zhu, Dawei Li, Andy Wang, and Hairong Qi. Talking face generation by conditional recurrent adversarial network. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 919–925. International Joint Conferences on Artificial Intelligence Organization, 7 2019.
325
+ [54] Luuk Spreeuwers, Maikel Schils, and Raymond Veldhuis. Towards robust evaluation of face morphing detection. In 2018 26th European Signal Processing Conference (EU-SIPCO), pages 1027-1031. IEEE, 2018.
326
+ [55] Keqiang Sun, Wayne Wu, Tinghao Liu, Shuo Yang, Quan Wang, Qiang Zhou, Zuochang Ye, and Chen Qian. Fab: A robust facial landmark detection framework for motion-blurred videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5462-5471, 2019.
327
+ [56] Yi Sun, Xiaogang Wang, and Xiaou Tang. Deep convolutional network cascade for facial point detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3476-3483, 2013.
328
+ [57] Ayush Tewari, Mohamed Elgharib, Gaurav Bharaj, Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zollhofer, and Christian Theobalt. Stylerig: Rigging stylegan for 3d control over portrait images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6142-6151, 2020.
329
+ [58] Justus Thies, Michael Zollhofer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. Face2face: Real-time face capture and reenactment of rgb videos. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2387-2395, 2016.
330
+ [59] Boris van Breugel, Trent Kyono, Jeroen Berrevoets, and Michaela van der Schaar. Decaf: Generating fair synthetic data using causally-aware generative networks. Advances in Neural Information Processing Systems, 34:22221-22233, 2021.
331
+ [60] Ting-Chun Wang, Ming-Yu Liu, Andrew Tao, Guilin Liu, Jan Kautz, and Bryan Catanzaro. Few-shot video-to-video synthesis. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
332
+ [61] Erroll Wood, Tadas Baltrusaitis, Charlie Hewitt, Sebastian Dziadzio, Thomas J Cashman, and Jamie Shotton. Fake it till you make it: face analysis in the wild using synthetic data alone. In Proceedings of the IEEE/CVF international conference on computer vision, pages 3681-3691, 2021.
333
+ [62] Erroll Wood, Tadas Baltrusaitis, Louis-Philippe Morency, Peter Robinson, and Andreas Bulling. Learning an appearance-based gaze estimator from one million synthesised images. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, pages 131–138, 2016.
334
+ [63] Yue Wu, Zuoguan Wang, and Qiang Ji. Facial feature tracking under varying facial expressions and face poses based on restricted boltzmann machines. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3452-3459, 2013.
335
+ [64] Cheng-hsin Wu, Ningyuan Zheng, Scott Ardisson, Rohan Bali, Danielle Belko, Eric Brockmeyer, Lucas Evans, Timothy Godisart, Hyowon Ha, Alexander Hypes, Taylor Koska,
336
+
337
+ Steven Krenn, Stephen Lombardi, Xiaomin Luo, Kevyn McPhail, Laura Millerschoen, Michal Perdoch, Mark Pitts, Alexander Richard, Jason Saragih, Junko Saragih, Takaaki Shiratori, Tomas Simon, Matt Stewart, Autumn Trimble, Xinshuo Weng, David Whitewolf, Chenglei Wu, Shouu-I Yu, and Yaser Sheikh. Multiface: A dataset for neural face rendering. In arXiv, 2022.
338
+ [65] Haotian Yang, Hao Zhu, Yanru Wang, Mingkai Huang, Qiu Shen, Ruigang Yang, and Xun Cao. Facescape: a large-scale high quality 3d face dataset and detailed riggable 3d face prediction. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 601-610, 2020.
339
+ [66] Ran Yi, Yong-Jin Liu, Yu-Kun Lai, and Paul L Rosin. Apdrawinggan: Generating artistic portrait drawings from face photos with hierarchical gans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10743-10752, 2019.
340
+ [67] Ran Yi, Zipeng Ye, Ruoyu Fan, Yezhi Shu, Yong-Jin Liu, Yu-Kun Lai, and Paul L Rosin. Animating portrait line drawings from a single face photo and a speech signal. In ACM SIGGRAPH 2022 Conference Proceedings, pages 1-8, 2022.
341
+ [68] Xi Yin, Xiang Yu, Kihyuk Sohn, Xiaoming Liu, and Manmohan Chandraker. Towards large-posed face frontalization in the wild. In In Proceeding of International Conference on Computer Vision, Venice, Italy, October 2017.
342
+ [69] Egor Zakharov, Aliaksandra Shysheya, Egor Burkov, and Victor Lempitsky. Few-shot adversarial learning of realistic neural talking head models. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9459-9468, 2019.
343
+ [70] Xiaoxing Zeng, Xiaojiang Peng, and Yu Qiao. Df2net: A dense-fine-finer network for detailed 3d face reconstruction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2315-2324, 2019.
344
+ [71] Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE signal processing letters, 23(10):1499-1503, 2016.
345
+ [72] Xing Zhang, Lijun Yin, Jeffrey F Cohn, Shaun Canavan, Michael Reale, Andy Horowitz, and Peng Liu. A high-resolution spontaneous 3d dynamic facial expression database. In 2013 10th IEEE international conference and workshops on automatic face and gesture recognition (FG), pages 1-6. IEEE, 2013.
346
+ [73] Zhimeng Zhang, Lincheng Li, Yu Ding, and Changjie Fan. Flow-guided one-shot talking face generation with a high-resolution audio-visual dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3661–3670, 2021.
347
+ [74] Zhanpeng Zhang, Ping Luo, Chen Change Loy, and Xiaou Tang. Facial landmark detection by deep multi-task learning. In European conference on computer vision, pages 94-108. Springer, 2014.
348
+ [75] Zhanpeng Zhang, Ping Luo, Chen Change Loy, and Xiaou Tang. Learning deep representation for face alignment with auxiliary attributes. IEEE transactions on pattern analysis and machine intelligence, 38(5):918-930, 2015.
349
+
350
+ [76] Aihua Zheng, Feixia Zhu, Hao Zhu, Mandi Luo, and Ran He. Talking face generation via learning semantic and temporal synchronous landmarks. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 3682-3689. IEEE, 2021.
351
+ [77] Erjin Zhou, Haoqiang Fan, Zhimin Cao, Yuning Jiang, and Qi Yin. Extensive facial landmark localization with coarse-to-fine convolutional network cascade. In Proceedings of the IEEE international conference on computer vision workshops, pages 386-391, 2013.
352
+ [78] Hang Zhou, Yasheng Sun, Wayne Wu, Chen Change Loy, Xiaogang Wang, and Ziwei Liu. Pose-controllable talking face generation by implicitly modularized audio-visual representation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4176-4186, 2021.
353
+ [79] Yang Zhou, Xintong Han, Eli Shechtman, Jose Echevarria, Evangelos Kalogerakis, and Dingzeyu Li. Makelttalk: speaker-aware talking-head animation. ACM Transactions on Graphics (TOG), 39(6):1-15, 2020.
354
+ [80] Jiapeng Zhu, Yujun Shen, Deli Zhao, and Bolei Zhou. Indomain gan inversion for real image editing. In European conference on computer vision, pages 592-608. Springer, 2020.
355
+ [81] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 2223-2232, 2017.
356
+ [82] Xiangyu Zhu, Zhen Lei, Xiaoming Liu, Hailin Shi, and Stan Z Li. Face alignment across large poses: A 3d solution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 146-155, 2016.
357
+ [83] Xiangyu Zhu, Zhen Lei, Xiaoming Liu, Hailin Shi, and Stan Z Li. Face alignment across large poses: A 3d solution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 146-155, 2016.
358
+ [84] Xiangyu Zhu, Xiaoming Liu, Zhen Lei, and Stan Z Li. Face alignment in full pose range: A 3d total solution. IEEE transactions on pattern analysis and machine intelligence, 41(1):78-92, 2017.
359
+ [85] Xiangxin Zhu and Deva Ramanan. Face detection, pose estimation, and landmark localization in the wild. In 2012 IEEE conference on computer vision and pattern recognition, pages 2879-2886. IEEE, 2012.
3dawarefaciallandmarkdetectionviamultiviewconsistenttrainingonsyntheticdata/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:279af71e7a95601d8c535e8cb7b076c52780411a0b79f0fbe46aba45c4b17f34
3
+ size 532250
3dawarefaciallandmarkdetectionviamultiviewconsistenttrainingonsyntheticdata/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:65298a97f3c21f21498198a8f4e488f060866e96ad2c5a2338066b198c7d495d
3
+ size 468226
3dawaremulticlassimagetoimagetranslationwithnerfs/38da797f-7f59-48cd-af34-af72487f73d0_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3a5fb0ab2a1201d0e916c2f9130564dbb043fa1e9190a1a6bef2875a898f2de3
3
+ size 72092
3dawaremulticlassimagetoimagetranslationwithnerfs/38da797f-7f59-48cd-af34-af72487f73d0_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:349d1f068c931c464e4df0b9b29ed9a3f63f62a7d8d7265aea213422f95e4d35
3
+ size 94539
3dawaremulticlassimagetoimagetranslationwithnerfs/38da797f-7f59-48cd-af34-af72487f73d0_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0f89fbca22ee3704d5ae325dd695ffbd4babf1948b76d878ca1bb759128b4597
3
+ size 2580578
3dawaremulticlassimagetoimagetranslationwithnerfs/full.md ADDED
@@ -0,0 +1,301 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # 3D-Aware Multi-Class Image-to-Image Translation with NeRFs
2
+
3
+ Senmao Li $^{1}$ Joost van de Weijer $^{2}$ Yaxing Wang $^{1*}$
4
+
5
+ Fahad Shahbaz Khan $^{3,4}$ Meiqin Liu $^{5}$ Jian Yang $^{1}$
6
+
7
+ $^{1}$ VCIP,CS, Nankai University, $^{2}$ Universitat Autònoma de Barcelona
8
+
9
+ $^{3}$ Mohamed bin Zayed University of AI, $^{4}$ Linkoping University, $^{5}$ Beijing Jiaotong University
10
+
11
+ senmaonk@gmail.com {yaxing,csjyang}@nankai.edu.cn joost@cvc.uab.es
12
+
13
+ fahad.khan@liu.se mqliu@bjtu.edu.cn
14
+
15
+ ![](images/22fc17fea1f2d332d96b8f08d9e486c59a1987425629a67389d12cce8b964b1e.jpg)
16
+ Figure 1. 3D-aware I2I translation: given a view-consistent 3D scene (the input), our method maps it into a high-quality target-specific image. Our approach produces consistent results across viewpoints.
17
+
18
+ # Abstract
19
+
20
+ Recent advances in 3D-aware generative models (3D-aware GANs) combined with Neural Radiance Fields (NeRF) have achieved impressive results. However no prior works investigate 3D-aware GANs for 3D consistent multiclass image-to-image (3D-aware I2I) translation. Naively using 2D-I2I translation methods suffers from unrealistic shape/identity change. To perform 3D-aware multi-class I2I translation, we decouple this learning process into a multi-class 3D-aware GAN step and a 3D-aware I2I trans
21
+
22
+ lation step. In the first step, we propose two novel techniques: a new conditional architecture and an effective training strategy. In the second step, based on the well-trained multi-class 3D-aware GAN architecture that preserves view consistency, we construct a 3D-aware I2I translation system. To further reduce the view-consistency problems, we propose several new techniques, including a U-net-like adaptor network design, a hierarchical representation constraint, and a relative regularization loss. In extensive experiments on two datasets, quantitative and qualitative results demonstrate that we successfully perform 3D-aware I2I translation with multi-view consistency. Code is available.
23
+
24
+ # 1. Introduction
25
+
26
+ Neural Radiance Fields (NeRF) have increasingly gained attention with their outstanding capacity to synthesize high-quality view-consistent images [31,39,66]. Benefiting from the adversarial mechanism [11], StyleNeRF [12] and concurrent works [4, 8, 44, 69] have successfully synthesized high-quality view-consistent, detailed 3D scenes by combining NeRF with StyleGAN-like generator design [22]. This recent progress in 3D-aware image synthesis has not yet been extended to 3D-aware I2I translation, where the aim is to translate in a 3D-consistent manner from a source scene to a target scene of another class (see Figure 1).
27
+
28
+ A naive strategy is to use well-designed 2D-I2I translation methods [15, 16, 26, 28, 46, 63, 65, 70]. These methods, however, suffer from unrealistic shape/identity changes when changing the viewpoint, which are especially notable when looking at a video. Main target-class characteristics, such as hair, ears, and noses, are not geometrically realistic, leading to unrealistic results which are especially disturbing when applying I2I to translate videos. Also, these methods typically underestimate the viewpoint change and result in target videos with less viewpoint change than the source video. Another direction is to apply video-to-video synthesis methods [2, 3, 6, 30, 53]. These approaches, however, rely heavily on either labeled data or multi-view frames for each object. In this work, we assume that we only have access to single-view RGB data.
29
+
30
+ To perform 3D-aware I2I translation, we extend the theory developed for 2D-I2I with recent developments in 3D-aware image synthesis. We decouple the learning process into a multi-class 3D-aware generative model step and a 3D-aware I2I translation step. The former can synthesize view-consistent 3D scenes given a scene label, thereby addressing the 3D inconsistency problems we discussed for 2D-I2I. We will use this 3D-aware generative model to initialize our 3D-aware I2I model. It therefore inherits the capacity of synthesizing 3D consistent images. To train effectively a multi-class 3D-aware generative model (see Figure 2(b)), we provide a new training strategy consisting of: (1) training an unconditional 3D-aware generative model (i.e., StyleNeRF) and (2) partially initializing the multiclass 3D-aware generative model (i.e., multi-class StyleNeRF) with the weights learned from StyleNeRF. In the 3D-aware I2I translation step, we design a 3D-aware I2I translation architecture (Figure 2(f)) adapted from the trained multi-class StyleNeRF network. To be specific, we use the main network of the pretrained discriminator (Figure 2(b)) to initialize the encoder $E$ of the 3D-aware I2I translation model (Figure 2(f)), and correspondingly, the pretrained generator (Figure 2(b)) to initialize the 3D-aware I2I gen
31
+
32
+ erator (Figure 2(f)). This initialization inherits the capacity of being sensitive to the view information.
33
+
34
+ When directly using the constructed 3D-aware I2I translation model (Figure 2(f)), some view-consistency problems still exist. This is because of the lack of multi-view consistency regularization and the use of only single-view images. Therefore, to address these problems, we introduce several techniques, including a U-net-like adaptor network design, a hierarchical representation constraint, and a relative regularization loss.
35
+
36
+ In sum, our work makes the following contributions:
37
+
38
+ - We are the first to explore 3D-aware multi-class I2I translation, which allows generating 3D consistent videos.
39
+ - We decouple 3D-aware I2I translation into two steps. In the first step, we propose a multi-class StyleNeRF together with a new strategy to train it effectively. In the second step, we propose a 3D-aware I2I translation architecture.
40
+ - To further address the view-inconsistency problem of 3D-aware I2I translation, we propose several techniques: a U-net-like adaptor, a hierarchical representation constraint and a relative regularization loss.
41
+ - In extensive experiments, our 3D-aware I2I method considerably outperforms existing 2D-I2I systems when evaluating temporal consistency.
42
+
43
+ # 2. Related Works
44
+
45
+ Neural Implicit Fields. Using neural implicit fields to represent 3D scenes has shown unprecedented quality. [37, 38, 43, 45, 48, 51] use 3D supervision to predict neural implicit fields. Recently, NeRF has demonstrated the power of neural implicit representations. NeRF and its variants [31, 39, 66] utilize a volume rendering technique that reconstructs a 3D scene as a combination of neural radiance and density fields to synthesize novel views.
46
+
47
+ 3D-aware GANs. Recent approaches [5, 9, 13, 19, 35, 40-42, 52, 62, 68] learn neural implicit representations without 3D or multi-view supervision. Combined with the adversarial loss, these methods typically randomly sample viewpoints, render photorealistic 2D images, and finally optimize their 3D representations. StyleNeRF [12] and concurrent works [4, 8, 44, 69] have successfully synthesized high-quality view-consistent, detailed 3D scenes with a StyleGAN-like generator design [22]. In this paper, we investigate 3D-aware image-to-image (3D-aware I2I) translation, where the aim is to translate in a 3D-consistent manner from a source scene to a target scene of another class. We also build on transfer learning of GANs [55, 60].
48
+
49
+ I2I translation. I2I translation with GANs [16, 57, 59, 61] has increasingly gained attention in computer vision.
50
+
51
+ Depending on the specific I2I translation task, recent works focus on paired I2I translation [10, 16, 71], unpaired I2I translation [1, 18, 24, 27, 32, 36, 46, 50, 56, 58, 63, 64, 70], diverse I2I translation [24, 32, 36, 46, 64, 70] and scalable I2I translation [7, 29, 65]. However, none of these approaches addresses the problem of 3D-aware I2I. For 3D scenes represented by neural implicit fields, directly applying these methods leads to view-inconsistency.
52
+
53
+ # 3. Method
54
+
55
+ Problem setting. Our goal is to achieve 3D consistent multi-class I2I translation trained on single-view data only. The system is designed to translate a viewpoint-video consisting of multiple images (source domain) into a new, photorealistic viewpoint-video scene of a target class. Furthermore, the system should be able to handle multi-class target domains. We decouple our learning into a multi-class 3D-aware generative model step and a multi-class 3D-aware I2I translation step.
56
+
57
+ # 3.1. Multi-class 3D-aware generative model
58
+
59
+ Let $I_{RGB} \in \mathbb{R}^{H \times W \times 3}$ be an image in the image domain. In this work, we aim to map a source image into a target sample conditioned on the target domain label $l \in \{1, \dots, L\}$ and a random noise vector $\mathbf{z} \in \mathbb{R}^{Z}$. Let the vectors $\mathbf{x}$ and $\mathbf{d}$ denote the 3D location and the 2D viewing direction, respectively.
60
+
61
+ Unconditional 3D-aware generative model. StyleNeRF [12] introduces a 5D function (3D location $x$ and 2D viewing direction $d$) to predict the volume density $\sigma$ and the RGB color $c$. Both $\sigma$ and $c$ are further used to render an image. As shown in Figure 2(a), StyleNeRF consists of four subnetworks: a mapping network $M$, a fully connected layer $F$, a generator $G$ and a discriminator $D$. The mapping network $M$ takes random noise $z$ as input and outputs the latent code $w$, which is further fed into both the fully connected layer $F$ and the generator $G$. Given the 3D location $x$, the 2D viewing direction $d$ and the latent code $w$, StyleNeRF renders the feature map $f$:
62
+
63
+ $$
64
+ \boldsymbol {f} (\boldsymbol {r}) = \int_ {0} ^ {\infty} p (t) \boldsymbol {c} (\boldsymbol {r} (t), \boldsymbol {d}) d t
65
+ $$
66
+
67
+ $$
68
+ p (t) = \exp \left(- \int_ {0} ^ {t} \sigma (\boldsymbol {r} (s)) d s\right) \cdot \sigma_ {\boldsymbol {w}} (\boldsymbol {r} (t)) \tag {1}
69
+ $$
70
+
71
+ $$
72
+ \boldsymbol {c}, \sigma = F (\boldsymbol {x}, \boldsymbol {d}, \boldsymbol {w}),
73
+ $$
74
+
75
+ where $\boldsymbol{r}(t) = \boldsymbol{o} + t\boldsymbol{d}$ ($\boldsymbol{o}$ is the camera origin) is the camera ray for each feature representation position. The generator $G$ takes as input the representation $\boldsymbol{f}$ and the latent code $\boldsymbol{w}$, and outputs a view-consistent, photo-realistic novel view $\hat{I}_{RGB}$. The discriminator $D$ distinguishes real images $I_{RGB}$ from generated images $\hat{I}_{RGB}$.
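+
+ In practice, the rendering integral in Eq. (1) is approximated by numerical quadrature along each camera ray. The following is a minimal PyTorch sketch of this discretization; the sample count, the near/far bounds and the `nerf_mlp` stand-in for the network $F$ are illustrative assumptions, not the authors' implementation.
+
+ ```python
+ import torch
+
+ def render_ray_features(nerf_mlp, rays_o, rays_d, w, near=0.5, far=6.0, n_samples=64):
+     """Quadrature approximation of Eq. (1): f(r) = int p(t) c(r(t), d) dt.
+
+     nerf_mlp(x, d, w) -> (c, sigma) is a stand-in for the network F.
+     rays_o, rays_d: (N, 3) ray origins and directions; w: (N, Z) latent codes.
+     """
+     t = torch.linspace(near, far, n_samples, device=rays_o.device)            # (S,)
+     pts = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]          # (N, S, 3)
+     dirs = rays_d[:, None, :].expand_as(pts)                                  # (N, S, 3)
+     c, sigma = nerf_mlp(pts, dirs, w[:, None, :].expand(-1, n_samples, -1))   # (N, S, C), (N, S)
+
+     delta = torch.cat([t[1:] - t[:-1], t.new_full((1,), far / n_samples)])    # (S,)
+     alpha = 1.0 - torch.exp(-sigma * delta)                                   # per-sample opacity
+     # transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j), as a cumulative product
+     trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
+                                      1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
+     weights = alpha * trans                                                   # discretized p(t)
+     return (weights[..., None] * c).sum(dim=1)                                # (N, C) feature f
+ ```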
76
+
77
+ The full objective of StyleNeRF is as follows:
78
+
79
+ $$
80
+ \begin{aligned} \mathcal{L}_{G} = \; & \mathbb{E}_{\boldsymbol{z} \sim \mathcal{Z}, \boldsymbol{p} \sim \mathcal{P}} \left[ v\big(D(G(F(\boldsymbol{x}, \boldsymbol{d}, M(\boldsymbol{z})), M(\boldsymbol{z})))\big) \right] \\ & + \mathbb{E}_{I_{RGB} \sim p_{\mathrm{data}}} \left[ v\big(-D(I_{RGB}) + \lambda \| \nabla D(I_{RGB}) \|^{2}\big) \right] \\ & + \beta \cdot \mathcal{L}_{\mathrm{NeRF\text{-}path}}, \end{aligned} \tag{2}
81
+ $$
82
+
83
+ where $v(u) = -\log (1 + \exp (-u))$ and $p_{\mathrm{data}}$ is the data distribution. $\mathcal{L}_{\mathrm{NeRF\text{-}path}}$ is the NeRF path regularization used in StyleNeRF. We set $\beta = 0.2$ and $\lambda = 0.5$ following StyleNeRF.
84
+
85
+ Conditional 3D-aware generative model. Figure 2(b) shows the proposed multi-class 3D-aware generative model (i.e., multi-class StyleNeRF). Compared to the StyleNeRF architecture (Figure 2(a)), we introduce two mapping networks: $M_{1}$ and $M_{2}$. The mapping network $M_{1}$ outputs the latent code $\boldsymbol{w}_{1}$, while the mapping network $M_{2}$ takes as input the concatenation of the noise $\boldsymbol{z}$ and the class embedding $e_{l}$, and outputs the latent code $\boldsymbol{w}_{2}$. The second mapping network $M_{2}$ aims to guide the generator $G$ to synthesize a class-specific image. We do not feed the latent code $\boldsymbol{w}_{2}$ into NeRF's fully connected layer $F$, since we expect $F$ to learn a class-agnostic feature representation, which is beneficial for multi-class 3D-aware I2I translation.
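+
+ A minimal PyTorch sketch of this two-mapping-network design is given below: $M_1$ maps the noise alone, while $M_2$ maps the noise concatenated with a learned class embedding. The layer widths, depth and activation are illustrative assumptions rather than the paper's exact configuration.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ConditionalMappings(nn.Module):
+     def __init__(self, z_dim=512, w_dim=512, n_classes=3, embed_dim=512, depth=8):
+         super().__init__()
+         self.embed = nn.Embedding(n_classes, embed_dim)   # class embedding e_l
+         def mlp(in_dim):
+             layers = []
+             for i in range(depth):
+                 layers += [nn.Linear(in_dim if i == 0 else w_dim, w_dim), nn.LeakyReLU(0.2)]
+             return nn.Sequential(*layers)
+         self.m1 = mlp(z_dim)               # class-agnostic branch (conditions F and G)
+         self.m2 = mlp(z_dim + embed_dim)   # class-specific branch (conditions G only)
+
+     def forward(self, z, label):
+         w1 = self.m1(z)
+         w2 = self.m2(torch.cat([z, self.embed(label)], dim=1))
+         return w1, w2
+ ```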
86
+
87
+ To be able to train multi-class StyleNeRF, we adapt the loss function. We require $D$ to address multiple adversarial classification tasks simultaneously, as in [33]. Specifically, given the discriminator output $D(\cdot) \in \mathbb{R}^L$, we select the $l$-th class response. Using the response for the $l$-th class, we compute the adversarial loss and back-propagate gradients:
88
+
89
+ $$
90
+ \begin{aligned} \mathcal{L}_{G}^{l} = \; & \mathbb{E}_{\boldsymbol{z} \sim \mathcal{Z}, \boldsymbol{x} \sim \mathcal{P}_{x}, \boldsymbol{d} \sim \mathcal{P}_{d}} \left[ v\big(D(\hat{I}_{RGB})_{l\text{-}th}\big) \right] \\ & + \mathbb{E}_{I_{RGB} \sim p_{\mathrm{data}}} \left[ v\big(-D(I_{RGB})_{l\text{-}th} + \lambda \| \nabla D(I_{RGB})_{l\text{-}th} \|^{2}\big) \right] \\ & + \beta \cdot \mathcal{L}_{\mathrm{NeRF\text{-}path}}. \end{aligned} \tag{3}
91
+ $$
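+
+ As a sketch of how the per-class responses in Eq. (3) can be selected and turned into the usual softplus form of $v(\cdot)$, the snippet below gathers the $l$-th logit from a discriminator assumed to output one logit per class. Variable names are illustrative, and the gradient-penalty term on the selected response is omitted for brevity.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def multiclass_adv_losses(d_real, d_fake, labels):
+     """d_real, d_fake: (N, L) per-class logits; labels: (N,) target class index l."""
+     idx = labels.view(-1, 1)
+     real_l = d_real.gather(1, idx).squeeze(1)   # l-th response for real images
+     fake_l = d_fake.gather(1, idx).squeeze(1)   # l-th response for generated images
+     # v(u) = -log(1 + exp(-u)) = -softplus(-u); minimizing the negated terms gives:
+     d_loss = F.softplus(-real_l).mean() + F.softplus(fake_l).mean()
+     g_loss = F.softplus(-fake_l).mean()
+     return d_loss, g_loss
+ ```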
92
+
93
+ We initialize the multi-class StyleNeRF with the weights learned by the unconditional StyleNeRF (Eq. 2), since training from scratch fails to converge; results of this comparison are shown in Figure 7. To be specific, we directly copy the weights learned from StyleNeRF for $M_{1}$, $F$ and $G$, which have the same parameter sizes. For the mapping network $M_{2}$, we duplicate the weights from $M$ except for the first layer, which is trained from scratch because of its different parameter size. The discriminator is similarly initialized except for the last layer, which is a new convolution layer with $L$ output channels. Using the proposed initialization method, we successfully generate class-specific, photorealistic, high-resolution results.
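+
+ This partial initialization can be implemented by copying every pretrained parameter whose name and shape match the new architecture, and leaving the mismatched layers (the first layer of $M_2$, the last discriminator layer) at their fresh initialization. A minimal sketch under these assumptions:
+
+ ```python
+ import torch
+
+ def partial_init(target_net, pretrained_state):
+     """Copy pretrained weights into target_net wherever name and shape match."""
+     own_state = target_net.state_dict()
+     matched = {k: v for k, v in pretrained_state.items()
+                if k in own_state and own_state[k].shape == v.shape}
+     own_state.update(matched)
+     target_net.load_state_dict(own_state)
+     # parameters left at their fresh initialization, e.g. the first layer of M2
+     # or the new L-channel output layer of the discriminator
+     return sorted(set(pretrained_state) - set(matched))
+ ```
+
+ At training time one would call, for instance, `partial_init(multiclass_stylenerf.M2, stylenerf.M.state_dict())` for each subnetwork pair; the module names here are hypothetical.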
94
+
95
+ # 3.2. 3D-aware I2I translation
96
+
97
+ Figure 2(f) shows the 3D-aware I2I translation network at inference time. It consists of the encoder $E$, the generator $G$ and two mapping networks $M_{1}$ and $M_{2}$.
98
+
99
+ ![](images/be837d796fb2659dd32c9f9068e8e2b55d4d30650a14b68ad0f939ad6e8997d9.jpg)
100
+ Figure 2. Overview of our method. (a) We first train a 3D-aware generative model (i.e., StyleNeRF) with single-view photos. (b) We extend StyleNeRF to multi-class StyleNeRF. We introduce an effective training strategy: initializing multi-class StyleNeRF with StyleNeRF. (c) The training of the proposed 3D-aware I2I translation. It consists of the encoder $E$, the adaptor $A$, the generator $G$ and two mapping networks $M_1$ and $M_2$. We freeze all networks and train only the adaptor $A$. The encoder is initialized by the main network of the pretrained discriminator. We introduce several techniques to address the view-consistency problems, including a U-net-like adaptor $A$, (d) a relative regularization loss and (e) a hierarchical representation constraint. (f) Usage of the proposed model at inference time.
101
+
102
+ Inspired by DeepI2I [61], we use the pretrained discriminator (Figure 2(b)) to initialize the encoder $E$ of the 3D-aware I2I translation model (Figure 2(f)), and correspondingly, the pretrained generator (Figure 2(b)) to initialize the 3D-aware I2I generator. To align the encoder with the generator, [61] introduces a Resnet-like adaptor network to connect the encoder and the generator. The adaptor is trained without any real data. However, directly using these techniques for 3D-aware I2I translation still suffers from view-consistency problems. Therefore, in the following, we introduce several designs to address this problem: a U-net-like adaptor network design, a hierarchical representation constraint and a relative regularization loss.
103
+
104
+ U-net-like adaptor. As shown in Figure 2(c), to overcome 3D-inconsistency in the results, we propose a U-net-like adaptor $A$. This design helps preserve the spatial structure of the input feature. Such a design has been used before for semantic segmentation and label-to-image translation [17]. In this paper, we experimentally demonstrate that the U-net-like adaptor is effective at reducing the inconsistency.
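+
+ The sketch below illustrates one possible U-net-like adaptor: the encoder feature map is downsampled and upsampled again, with skip connections that preserve spatial structure and a long residual skip. The channel widths and depth are illustrative assumptions, not the paper's exact configuration.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class UNetAdaptor(nn.Module):
+     def __init__(self, in_ch=512, mid_ch=256, out_ch=512):
+         super().__init__()
+         self.down1 = nn.Sequential(nn.Conv2d(in_ch, mid_ch, 3, 2, 1), nn.LeakyReLU(0.2))
+         self.down2 = nn.Sequential(nn.Conv2d(mid_ch, mid_ch, 3, 2, 1), nn.LeakyReLU(0.2))
+         self.up1 = nn.Sequential(nn.ConvTranspose2d(mid_ch, mid_ch, 4, 2, 1), nn.LeakyReLU(0.2))
+         self.up2 = nn.Sequential(nn.ConvTranspose2d(mid_ch * 2, out_ch, 4, 2, 1), nn.LeakyReLU(0.2))
+         self.skip = nn.Conv2d(in_ch, out_ch, 1)
+
+     def forward(self, x):
+         d1 = self.down1(x)                      # 1/2 resolution
+         d2 = self.down2(d1)                     # 1/4 resolution
+         u1 = self.up1(d2)                       # back to 1/2, fused with d1 below
+         u2 = self.up2(torch.cat([u1, d1], 1))   # back to full resolution
+         return u2 + self.skip(x)                # long skip preserves spatial structure
+ ```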
105
+
106
+ Hierarchical representation constraint. As shown in Figure 2(e), given the noise $\mathbf{z}$, the 3D location $\mathbf{x}$ and the 2D viewing direction $\mathbf{d}$, the fully connected layer $F$ renders the 3D-consistent feature map $\mathbf{f} = F(\mathbf{x}, \mathbf{d}, \mathbf{w}_1) = F(\mathbf{x}, \mathbf{d}, M_1(\mathbf{z}))$. We further extract the hierarchical representation $\{G(\mathbf{f}, \mathbf{w}_1, \mathbf{w}_2)_k\}$ as well as the synthesized image $\hat{I}_{RGB} = G(\mathbf{f}, \mathbf{w}_1, \mathbf{w}_2)$. Here $G(\mathbf{f}, \mathbf{w}_1, \mathbf{w}_2)_k$ is the
107
+
108
+ $k$-th $(k = m, \dots, n;\; n > m)$ ResBlock$^1$ output of the generator $G$. We then take the generated image $\hat{I}_{RGB}$ as input for the encoder $E$: $E(\hat{I}_{RGB})$, which is fed into the adaptor network $A$, i.e., $\hat{\pmb{f}} = A(E(\hat{I}_{RGB}))$. In this step, our loss is
109
+
110
+ $$
111
+ \mathcal {L} _ {A} = \left\| \boldsymbol {f} - \hat {\boldsymbol {f}} \right\| _ {1}. \tag {4}
112
+ $$
113
+
114
+ For the intermediate layers, we propose a hierarchical representation constraint. Given the output $\hat{\pmb{f}}$ and the latent codes (i.e., $\pmb{w}_1$ and $\pmb{w}_2$)², we similarly collect the hierarchical features $\left\{G(\hat{\pmb{f}}, \pmb{w}_1, \pmb{w}_2)_k\right\}$. The objective is
115
+
116
+ $$
117
+ \mathcal {L} _ {H} = \sum_ {k} \left\| G (\boldsymbol {f}, \boldsymbol {w} _ {1}, \boldsymbol {w} _ {2}) _ {k} - G (\hat {\boldsymbol {f}}, \boldsymbol {w} _ {1}, \boldsymbol {w} _ {2}) _ {k} \right\| _ {1}. \tag {5}
118
+ $$
119
+
120
+ In this step, we freeze every network except for the U-net-like adaptor, which is learned. Note that we do not access any real data to train the adaptor, since we use only images generated by the trained generator (Figure 2(b)).
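+
+ The following is a minimal sketch of one adaptor training step under these constraints: all networks except $A$ are frozen, and supervision comes entirely from images synthesized by the frozen generator. The interfaces (`nerf`, `M2(z, label)`, `G(..., return_blocks=...)`) are illustrative stand-ins for the pretrained modules, not the authors' actual API.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def adaptor_step(A, E, G, nerf, M1, M2, z, label, cam, opt, ks=(2, 3, 4)):
+     """One optimization step for the adaptor A; every other network stays frozen."""
+     with torch.no_grad():
+         w1, w2 = M1(z), M2(z, label)
+         f = nerf(cam, w1)                               # view-consistent NeRF feature map
+         feats, img = G(f, w1, w2, return_blocks=ks)     # hierarchical targets + fake image
+     f_hat = A(E(img))                                   # adaptor output from the 2D image
+     loss_a = F.l1_loss(f_hat, f)                        # Eq. (4)
+     feats_hat, _ = G(f_hat, w1, w2, return_blocks=ks)   # re-run the frozen generator on f_hat
+     loss_h = sum(F.l1_loss(a, b) for a, b in zip(feats_hat, feats))   # Eq. (5)
+     loss = loss_a + loss_h
+     opt.zero_grad(); loss.backward(); opt.step()
+     return float(loss)
+ ```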
121
+
122
+ Relative regularization loss. We aim to impose the consistency of the translated 3D scene with a single-image regularization, instead of relying on images from consecutive views. We propose a relative regularization loss based on neighboring patches. We assume that the relationships between neighboring patches
123
+
124
+ <table><tr><td rowspan="2">Dataset Method</td><td colspan="2">CelebA-HQ</td><td colspan="2">AFHQ</td></tr><tr><td>TC↓</td><td>FID↓</td><td>TC↓</td><td>FID↓</td></tr><tr><td>*MUNIT</td><td>30.240</td><td>31.4</td><td>28.497</td><td>41.5</td></tr><tr><td>*DRIT</td><td>35.452</td><td>52.1</td><td>25.341</td><td>95.6</td></tr><tr><td>*MSGAN</td><td>31.641</td><td>33.1</td><td>34.236</td><td>61.4</td></tr><tr><td>StarGANv2</td><td>10.250</td><td>13.6</td><td>3.025</td><td>16.1</td></tr><tr><td>Ours (3D)</td><td>3.743</td><td>22.3</td><td>2.067</td><td>15.3</td></tr><tr><td></td><td>TC↓</td><td>(unc)FID↓</td><td>TC↓</td><td>(unc)FID↓</td></tr><tr><td>†Liu et al. [34]</td><td>13.315</td><td>17.8</td><td>3.462</td><td>20.0</td></tr><tr><td>StarGANv2</td><td>10.250</td><td>12.2</td><td>3.025</td><td>9.9</td></tr><tr><td>†Kunhee et al. [23]</td><td>10.462</td><td>6.7</td><td>3.241</td><td>10.0</td></tr><tr><td>Ours (3D)</td><td>3.743</td><td>18.7</td><td>2.067</td><td>11.4</td></tr></table>
+
+ Table 1. Comparison with baselines on the TC and FID metrics. * denotes that we used the results provided by StarGANv2. † means that we used the pre-trained networks provided by the authors.
125
+
126
+ are equivalent to those between the corresponding patches of two consecutive views. For example, when inputting multi-view consistent scene images, the positions of the eyes move consistently. The fully connected layer (i.e., the NeRF module) $F$ renders the view-consistent feature map $f$, which finally determines the view-consistent reconstructed 3D scene. Thus, we expect the output $\hat{f}$ of the adaptor $A$ to inherit the view-consistent property of the feature map $f$.
127
+
128
+ We randomly sample one vector from the feature map $\pmb{f}$ (e.g., the red square in Figure 2(d)), denoted as $\pmb{f}^{\eta}$. Then we sample the eight nearest neighboring vectors of $\pmb{f}^{\eta}$ (dark green squares in Figure 2(d)), denoted by $\pmb{f}^{\eta, \varepsilon}$, where $\varepsilon = 1, \dots, 8$ is the neighbor index. Similarly, we sample the vectors $\hat{\pmb{f}}^{\eta}$ and $\hat{\pmb{f}}^{\eta, \varepsilon}$ from the feature map $\hat{\pmb{f}}$ (red and dark green dashed squares in Figure 2(d)). We then compute the patch differences:
129
+
130
+ $$
131
+ d _ {\boldsymbol {f}} ^ {\eta , \varepsilon} = \boldsymbol {f} ^ {\eta} \ominus \boldsymbol {f} ^ {\eta , \varepsilon}, d _ {\hat {\boldsymbol {f}}} ^ {\eta , \varepsilon} = \hat {\boldsymbol {f}} ^ {\eta} \ominus \hat {\boldsymbol {f}} ^ {\eta , \varepsilon}, \tag {6}
132
+ $$
133
+
134
+ where $\ominus$ represents vector subtraction. In order to preserve the consistency, we force the corresponding patch differences of $\pmb{f}$ and $\hat{\pmb{f}}$ to stay close:
135
+
136
+ $$
137
+ \mathcal {L} _ {R} = \left\| d _ {\boldsymbol {f}} ^ {\eta , \varepsilon} - d _ {\hat {\boldsymbol {f}}} ^ {\eta , \varepsilon} \right\| _ {1}. \tag {7}
138
+ $$
139
+
140
+ The underlying intuition is straightforward: the difference vectors at the same location in $\pmb{f}$ and $\hat{\pmb{f}}$ should be more closely related in the latent space than those of random pairs.
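+
+ A sketch of this relative regularization loss is given below: for each sample we pick one interior feature location, compute its differences with the eight spatial neighbors in both $f$ and $\hat{f}$ (Eq. (6)), and penalize the L1 distance between the two sets of differences (Eq. (7)). Sampling a single interior location per batch element is an illustrative simplification.
+
+ ```python
+ import torch
+
+ def relative_reg_loss(f, f_hat):
+     """f, f_hat: (N, C, H, W) feature maps from the NeRF module and from the adaptor."""
+     n, c, h, w = f.shape
+     # one interior location per batch element keeps all 8 neighbours inside the map
+     ys = torch.randint(1, h - 1, (n,), device=f.device)
+     xs = torch.randint(1, w - 1, (n,), device=f.device)
+     idx = torch.arange(n, device=f.device)
+     center_f, center_g = f[idx, :, ys, xs], f_hat[idx, :, ys, xs]   # (N, C) vectors
+     loss = f.new_zeros(())
+     for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]:
+         d_f = center_f - f[idx, :, ys + dy, xs + dx]       # Eq. (6) on f
+         d_g = center_g - f_hat[idx, :, ys + dy, xs + dx]   # Eq. (6) on f_hat
+         loss = loss + (d_f - d_g).abs().mean()             # Eq. (7)
+     return loss / 8
+ ```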
141
+
142
+ The final objective is
143
+
144
+ $$
145
+ \mathcal {L} = \mathcal {L} _ {H} + \mathcal {L} _ {A} + \mathcal {L} _ {R}. \tag {8}
146
+ $$
147
+
148
+ # 4. Experiments
149
+
150
+ # 4.1. Experimental setup
151
+
152
+ Training details. We use the trained StyleNeRF to partially initialize our multi-class StyleNeRF architecture. We then adapt the structure of the multi-class StyleNeRF into the 3D-aware I2I architecture. The proposed method is implemented in PyTorch [47]. We use Adam [25] with a batch size
153
+
154
+
155
+
156
+ <table><tr><td>Ini.</td><td>Ada.</td><td>Hrc.</td><td>Rrl.</td><td>TC↓</td><td>FID↓</td></tr><tr><td>Y</td><td>N</td><td>N</td><td>N</td><td>2.612</td><td>23.8</td></tr><tr><td>Y</td><td>Y</td><td>N</td><td>N</td><td>2.324</td><td>23.1</td></tr><tr><td>Y</td><td>Y</td><td>Y</td><td>N</td><td>2.204</td><td>16.1</td></tr><tr><td>Y</td><td>Y</td><td>Y</td><td>Y</td><td>2.067</td><td>15.3</td></tr></table>
157
+
158
+ Table 2. Impact of several components on the performance on AFHQ. The second row is the case where the 3D-aware I2I translation model is initialized by the weights learned from the multi-class StyleNeRF, and then trained with a Resnet-based adaptor and an $L_{1}$ loss between the representations $f$ and $\hat{f}$. The proposed techniques continuously improve the consistency and performance. Ini.: initialization method for multi-class StyleNeRF, Ada.: U-net-like adaptor, Hrc.: hierarchical representation constraint, Rrl.: relative regularization loss.
159
+
160
+ ![](images/eb275f460a0b3cfa2c4f46a236179adafb7a047d7edcab94d40fe236008a81fe.jpg)
161
+
162
+ ![](images/5352e8612629871db006d9a0389bb05f6f4aa4bd3b02af7470e50f420b899545.jpg)
163
+ Figure 3. (Top) Using a single mapping network which takes as input the concatenated class embedding and noise. We find that it fails to generate target-specific realistic images. (Bottom) Using two mapping networks without combining their outputs as in the proposed method. This design fails to generate 3D-aware results.
164
+
165
+ of 64 and a learning rate of 0.0002. We use $2\times$ Quadro RTX 3090 GPUs (24 GB VRAM) to conduct all our experiments. We provide the network details and more results in the Supp. Mat.
166
+
167
+ Datasets. Our experiments are conducted on the Animal Faces (AFHQ) [7] and CelebA-HQ [21] datasets. AFHQ contains 3 classes, each with about 5000 images. In CelebA-HQ, we use gender as the class, with $\sim$ 10k (10,057) male and $\sim$ 18k (17,943) female images in the training set. In this paper, all images are resized to $256 \times 256$.
168
+
169
+ ![](images/362609a517965255f3860b4570bf070f137aeef444948dbcba8435065046331a.jpg)
170
+ Figure 4. Comparative results between the proposed method and StarGANv2. We observe that StarGANv2 suffers from underestimating viewpoint changes when changing the input viewpoint (first column). It also leads to identity change (third and fourth columns), and a geometrically unrealistic ear (last two columns).
171
+
172
+ ![](images/d7188db05ea4a7091eb376ac596223270013f449c014d143d1237e2075be336d.jpg)
173
+ Figure 5. The generated images of (top) $G(\pmb{f}, \pmb{w}_1, \pmb{w}_2)$ and (bottom) $G(\hat{\pmb{f}}, \pmb{w}_1, \pmb{w}_2)$, which show that we correctly align the outputs of the NeRF module $F$ and the adaptor $A$.
174
+
175
+ Baselines. We compare to MUNIT [15], DRIT [28], MSGAN [20], StarGANv2 [7], Kunhee et al. [23] and Liu et al. [34], all of which perform image-to-image translation.
176
+
177
+ Evaluation Measures. We employ the widely used Fréchet Inception Distance (FID) [14]. We also propose a new measure in which we
178
+
179
+ combine two metrics: one measures the consistency between neighboring frames (which we want to be low), and another measures the diversity over the whole video (which we would like to be high). We adopt a modified temporal loss (TL) [54], which computes the Frobenius difference between two frames to evaluate video consistency. Considering this measure alone would reward degenerate videos in which all generated frames are identical. For successful 3D-aware I2I translation, we expect the system to be sensitive to view changes in the source video, and therefore combine low consecutive-frame changes with high diversity over the video. To this end, we compute LPIPS [67] for each video (vLPIPS), which indicates the diversity of the generated video sequence. To evaluate both the consistency and the view-sensitivity of the generated video, we propose a new temporal consistency metric (TC):
180
+
181
+ $$
182
+ \mathrm{TC} = \mathrm{TL} / \mathrm{vLPIPS}. \tag{9}
183
+ $$
184
+
185
+ Since the changes between two consecutive views are small, for each video we evaluate view-consistency with frame intervals of 1, 2 and 4. Note that a lower TC value is better.
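+
+ A minimal sketch of how TC can be computed for one generated video is shown below. The temporal loss is the mean Frobenius difference between frames at a given interval, and vLPIPS is the mean LPIPS distance over the same frame pairs; treating vLPIPS this way is our reading of the metric, and the use of the `lpips` package with an AlexNet backbone is an assumption.
+
+ ```python
+ import torch
+ import lpips  # pip install lpips
+
+ lpips_fn = lpips.LPIPS(net='alex')  # perceptual distance used for vLPIPS
+
+ def temporal_consistency(frames, interval=1):
+     """frames: (T, 3, H, W) video frames scaled to [-1, 1]. Returns TC = TL / vLPIPS."""
+     a, b = frames[:-interval], frames[interval:]
+     tl = torch.linalg.norm((a - b).flatten(1), dim=1).mean()   # Frobenius difference (TL)
+     with torch.no_grad():
+         vlpips = lpips_fn(a, b).mean()                         # diversity over the video
+     return (tl / vlpips).item()
+ ```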
186
+
187
+ # 4.2. Quantitative and qualitative results.
188
+
189
+ We evaluate the performance of the proposed method on both the AFHQ animal faces and the CelebA-HQ human face datasets. As reported in Table 1, in terms of TC the proposed method achieves the best score on both datasets. For example, we
190
+
191
+ ![](images/eaf356b46896241f4df6c80258727e840afe23a669cf43b2da75e5abcda410e1.jpg)
192
+ Figure 6. Interpolation between the dog and wildlife classes.
193
+
194
+ obtain a TC of 3.743 on CelebA-HQ, which is better than StarGANv2 (10.250 TC). This indicates that our method dramatically improves consistency. As reported in Table 1 (top), across both datasets the proposed method consistently outperforms the baselines with significant gains in terms of FID, except for StarGANv2, which obtains superior results on CelebA-HQ; on AFHQ, however, we achieve a better FID score than StarGANv2. Kunhee et al. [23] report the unconditional FID ((unc)FID), which is computed between synthesized images and all training samples instead of per class. As reported in Table 1 (bottom), we achieve competitive results on the (unc)FID metric. Note that while 2D I2I translation methods (e.g., StarGANv2) can obtain high quality for each individual image, they cannot synthesize images of the same scene with 3D consistency, and they suffer from unrealistic shape/identity changes when changing the viewpoint, which are especially notable when looking at a video.
195
+
196
+ In Figures 1 and 4, we show 3D-aware I2I translation results. When changing the input viewpoint (Figure 4, first two columns), the outputs of StarGANv2 do not maintain the correct head pose and underestimate the pose changes with respect to the frontal view. To verify that this is actually the case, we also compute the diversity (i.e., vLPIPS) within a single video sequence: StarGANv2 and our method obtain 0.032 and 0.101, respectively, on CelebA-HQ. This confirms that the diversity (due to pose changes) is lowest for StarGANv2. Showing the limitations of standard I2I methods for 3D-aware I2I more clearly, we observe that StarGANv2 suffers from unrealistic changes when changing the viewpoint. For example, when translating the class cat to wildlife, the generated images change from wolf to leopard when varying the viewpoint (Figure 4, third and fourth columns).
197
+
198
+ Also, the main target class characteristics, such as ears, are not geometrically realistic, leading to unrealistic 3D scene videos. Our method, however, eliminates these shortcomings and performs efficient high-resolution image translation with high 3D-consistency: it preserves the input image pose while changing the style of the output images. We show high-resolution images $(1024 \times 1024)$ in the Supp. Mat.
199
+
200
+ # 4.3. Ablation study
201
+
202
+ Conditional 3D-aware generative architecture. In this experiment, we verify our network design by comparing it with two alternative network designs. As shown in Figure 3 (top), we explore a naive strategy: using one mapping network which takes as input the concatenated class embedding and noise. In this case, the mapping network outputs a class-specific latent code $w$, which is fed into the fully connected network $F$ to output a class-specific representation $f$; both the latent code $w$ and the representation $f$ are determined by the same class. However, when handling the 3D-aware multi-class I2I translation task, the feature representation $\hat{f}$ is combined with latent codes $w$ from varying class embeddings, which leads to unrealistic image generation (Figure 3 (top)).
203
+
204
+ As shown in Figure 3 (bottom), we also test two mapping networks without concatenating their outputs as in the proposed method. This design guarantees that the output of the fully connected layer $F$ is class-agnostic. However, we experimentally observe that this model fails to handle 3D-aware generation.
205
+
206
+ Effective training strategy for the multi-class 3D-aware generative model. We evaluate the proposed training strategy on the AFHQ and CelebA-HQ datasets by training the proposed multi-class architecture either from scratch or with the proposed initialization. As shown in Figure 7 (top), the model trained from scratch synthesizes unrealistic faces on the CelebA-HQ dataset and low-quality cats on AFHQ. This is because the style-based conditional generator is hard to optimize directly and prone to mode collapse [49]. The proposed training strategy, however, manages to synthesize photo-realistic high-resolution images with high multi-view consistency. This strategy first performs unconditional learning, which already leads to a satisfactory generative ability and thus relaxes the difficulty of directly training the conditional model.
207
+
208
+ Alignment and interpolation. Figure 5 shows the outputs of the generator when taking as input the feature representations $\pmb{f}$ and $\hat{\pmb{f}}$. This confirms that the proposed method successfully aligns the outputs of the fully connected layer $F$ and the adaptor $A$. Figure 6 reports interpolation results obtained by keeping the input image fixed while interpolating the class embedding
209
+
210
+ ![](images/755a42b84a4c04efaedb54b9602a7203a1c2ff6c438f6557270cb0b943f1f84d.jpg)
211
+ Figure 7. Qualitative results of multi-class StyleNeRF trained from scratch (top) and with the proposed strategy (bottom).
212
+
213
+ between two classes. Our model still manages to preserve the view-consistency and to generate high-quality images even for class embeddings never seen during training.
214
+
215
+ Techniques for improving the view-consistency. We perform an ablation study on the impact of several design elements on the overall performance of the system: the proposed initialization of the 3D-aware I2I translation model (Ini.), the U-net-like adaptor (Ada.), the hierarchical representation constraint (Hrc.) and the relative regularization loss (Rrl.). We evaluate these four factors in Table 2. The results show that only using the proposed initialization (the second row of Table 2) already improves the view-consistency compared to StarGANv2 (Table 1). Utilizing either the U-net-like adaptor (Ada.) or the hierarchical representation constraint (Hrc.) leads to further performance gains. Finally, we obtain the best score when additionally adding the relative regularization loss (Rrl.) to the 3D-aware I2I translation model.
216
+
217
+ # 5. Conclusion
218
+
219
+ In this paper we are the first to explore 3D-aware I2I translation. We decouple the learning process into a multi-class 3D-aware generative model step and a 3D-aware I2I translation step. In the first step, we propose a new multi-class StyleNeRF architecture and an effective training strategy. We design the 3D-aware I2I translation model based on the well-optimized multi-class StyleNeRF model, so that it inherits the capacity of synthesizing 3D consistent images. In the second step, we propose several techniques to further reduce the view-inconsistency of the 3D-aware I2I translation.
220
+
221
+ Acknowledgement. We acknowledge the support from the Key Laboratory of Advanced Information Science and Network Technology of Beijing (XDXX2202), and the project supported by Youth Foundation (62202243). We acknowledge the Spanish Government funding for projects PID2019-104174GB-I00, TED2021-132513B-I00.
222
+
223
+ # References
224
+
225
+ [1] Kyungjune Baek, Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Hyunjung Shim. Rethinking the truly unsupervised image-to-image translation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14154-14163, 2021. 3
226
+ [2] Aayush Bansal, Shugao Ma, Deva Ramanan, and Yaser Sheikh. Recycle-gan: Unsupervised video retargeting. In Proceedings of the European conference on computer vision (ECCV), pages 119-135, 2018. 2
227
+ [3] Dina Bashkirova, Ben Usman, and Kate Saenko. Unsupervised video-to-video translation. arXiv preprint arXiv:1806.03698, 2018. 2
228
+ [4] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3d generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16123-16133, 2022. 2
229
+ [5] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5799-5809, 2021. 2
230
+ [6] Yang Chen, Yingwei Pan, Ting Yao, Xinmei Tian, and Tao Mei. Mocycle-gan: Unpaired video-to-video translation. In Proceedings of the 27th ACM International Conference on Multimedia, pages 647-655, 2019. 2
231
+ [7] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In CVPR, 2020. 3, 5, 6
232
+ [8] Yu Deng, Jiaolong Yang, Jianfeng Xiang, and Xin Tong. Gram: Generative radiance manifolds for 3d-aware image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10673-10683, 2022. 2
233
+ [9] Matheus Gadelha, Subhransu Maji, and Rui Wang. 3d shape induction from 2d views of multiple objects. In 2017 International Conference on 3D Vision (3DV), pages 402-411. IEEE, 2017. 2
234
+ [10] Abel Gonzalez-Garcia, Joost van de Weijer, and Yoshua Bengio. Image-to-image translation for cross-domain disentanglement. In NeurIPS, pages 1294–1305, 2018. 3
235
+ [11] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NeurIPS, pages 2672-2680, 2014. 2
236
+ [12] Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. Stylenerf: A style-based 3d-aware generator for high-resolution image synthesis. arXiv preprint arXiv:2110.08985, 2021. 2, 3
237
+ [13] Paul Henderson, Vagia Tsiminaki, and Christoph H Lampert. Leveraging 2d data to learn textured 3d mesh generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7498-7507, 2020. 2
238
+ [14] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a
239
+
240
+ two time-scale update rule converge to a local nash equilibrium. In NeurIPS, pages 6626-6637, 2017. 6
241
+ [15] Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In ECCV, pages 172-189, 2018. 2, 6
242
+ [16] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017. 2, 3
243
+ [17] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, pages 1125-1134, 2017. 4
244
+ [18] Somi Jeong, Youngjung Kim, Eungbean Lee, and Kwanghoon Sohn. Memory-guided unsupervised image-to-image translation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6558-6567, 2021. 3
245
+ [19] Danilo Jimenez Rezende, SM Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, and Nicolas Heess. Unsupervised learning of 3d structure from images. Advances in neural information processing systems, 29, 2016. 2
246
+ [20] Animesh Karnewar and Oliver Wang. Msg-gan: Multi-scale gradients for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7799–7808, 2020. 6
247
+ [21] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In ICLR, 2018. 5
248
+ [22] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, pages 4401-4410, 2019. 2
249
+ [23] Kunhee Kim, Sanghun Park, Eunyeong Jeon, Taehun Kim, and Daijin Kim. A style-aware discriminator for controllable image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18239-18248, 2022. 5, 6, 7
250
+ [24] Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jungkwon Lee, and Jiwon Kim. Learning to discover cross-domain relations with generative adversarial networks. In ICML, 2017. 3
251
+ [25] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2014. 5
252
+ [26] Minsu Ko, Eunju Cha, Sungjoo Suh, Huijin Lee, Jae-Joon Han, Jinwoo Shin, and Bohyung Han. Self-supervised dense consistency regularization for image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18301-18310, June 2022. 2
253
+ [27] Héctor Laria, Yaxing Wang, Joost van de Weijer, and Bogdan Raducanu. Hyper-gan: Transferring unconditional to conditional gans with hypernetworks. arXiv preprint arXiv:2112.02219, 2021. 3
254
+ [28] Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Kumar Singh, and Ming-Hsuan Yang. Diverse image-to-image translation via disentangled representations. In ECCV, 2018. 2, 6
255
+ [29] Hsin-Ying Lee, Hung-Yu Tseng, Qi Mao, Jia-Bin Huang, Yu-Ding Lu, Maneesh Singh, and Ming-Hsuan Yang. Drit++: Diverse image-to-image translation via disentangled representations. IJCV, pages 1-16, 2020. 3
256
+
257
+ [30] Kangning Liu, Shuhang Gu, Andrés Romero, and Radu Timofte. Unsupervised multimodal video-to-video translation via self-supervised learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1030–1040, 2021. 2
258
+ [31] Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural sparse voxel fields. NeurIPS, 2020. 2
259
+ [32] Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. In NeurIPS, pages 700-708, 2017. 3
260
+ [33] Ming-Yu Liu, Xun Huang, Arun Mallya, Tero Karras, Timo Aila, Jaakko Lehtinen, and Jan Kautz. Few-shot unsupervised image-to-image translation. In CVPR, pages 10551-10560, 2019. 3
261
+ [34] Yahui Liu, Enver Sangineto, Yajing Chen, Linchao Bao, Haoxian Zhang, Nicu Sebe, Bruno Lepri, Wei Wang, and Marco De Nadai. Smoothing the disentangled latent style space for unsupervised image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10785-10794, 2021. 5, 6
262
+ [35] Sebastian Lunz, Yingzhen Li, Andrew Fitzgibbon, and Nate Kushman. Inverse graphics gan: Learning to generate 3d shapes from unstructured 2d data. arXiv preprint arXiv:2002.12674, 2020. 2
263
+ [36] Youssef Alami Mejjati, Christian Richardt, James Tompkin, Darren Cosker, and Kwang In Kim. Unsupervised attention-guided image-to-image translation. In NeurIPS, pages 3693-3703, 2018. 3
264
+ [37] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019. 2
265
+ [38] Mateusz Michalkiewicz, Jhony K. Pontes, Dominic Jack, Mahsa Baktashmotlagh, and Anders Eriksson. Implicit surface representations as layers in neural networks. In The IEEE International Conference on Computer Vision (ICCV), October 2019. 2
266
+ [39] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. arXiv preprint arXiv:2003.08934, 2020. 2
267
+ [40] Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. Hologan: Unsupervised learning of 3d representations from natural images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7588–7597, 2019. 2
268
+ [41] Michael Niemeyer and Andreas Geiger. Campari: Camera-aware decomposed generative neural radiance fields. In 2021 International Conference on 3D Vision (3DV), pages 951-961. IEEE, 2021. 2
269
+ [42] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11453-11464, 2021. 2
270
+
271
+ [43] Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision. arXiv preprint arXiv:1912.07372, 2019. 2
272
+ [44] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. Stylesdf: High-resolution 3d-consistent image and geometry generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13503–13513, 2022. 2
273
+ [45] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. International Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2
274
+ [46] Taesung Park, Alexei A. Efros, Richard Zhang, and Jun-Yan Zhu. Contrastive learning for conditional image synthesis. In ECCV, 2020. 2, 3
275
+ [47] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017. 5
276
+ [48] Songyou Peng, Michael Niemeyer, Lars M. Mescheder, Marc Pollefeys, and Andreas Geiger. Convolutional occupancy networks. ArXiv, abs/2003.04618, 2020. 2
277
+ [49] Axel Sauer, Katja Schwarz, and Andreas Geiger. Stylegan-xl: Scaling stylegan to large diverse datasets. In ACM SIGGRAPH 2022 Conference Proceedings, pages 1-10, 2022. 7
278
+ [50] Xuning Shao and Weidong Zhang. Spatchgan: A statistical feature based discriminator for unsupervised image-to-image translation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6546-6555, 2021. 3
279
+ [51] Vincent Sitzmann, Michael Zollhöfer, and Gordon Wetzstein. Scene representation networks: Continuous 3d-structure-aware neural scene representations. In Advances in Neural Information Processing Systems, pages 1119–1130, 2019. 2
280
+ [52] Ayush Tewari, Xingang Pan, Ohad Fried, Maneesh Agrawala, Christian Theobalt, et al. Disentangled3d: Learning a 3d generative model with disentangled geometry and appearance from monocular images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1516-1525, 2022. 2
281
+ [53] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. Video-to-video synthesis. In NeurIPS, 2018. 2
282
+ [54] Wenjing Wang, Shuai Yang, Jizheng Xu, and Jiaying Liu. Consistent video style transfer via relaxation and regularization. IEEE Transactions on Image Processing, 29:9125-9139, 2020. 6
283
+ [55] Yaxing Wang, Abel Gonzalez-Garcia, David Berga, Luis Herranz, Fahad Shahbaz Khan, and Joost van de Weijer. Minegan: effective knowledge transfer from gans to target domains with few images. In CVPR, 2020. 2
284
+ [56] Yaxing Wang, Abel Gonzalez-Garcia, Joost van de Weijer, and Luis Herranz. SDIT: Scalable and diverse cross-domain image translation. In ACM MM, 2019. 3
285
+
286
+ [57] Yaxing Wang, Salman Khan, Abel Gonzalez-Garcia, Joost van de Weijer, and Fahad Shahbaz Khan. Semi-supervised learning for few-shot image-to-image translation. In CVPR, 2020. 2
287
+ [58] Yaxing Wang, Hector Laria Mantecon, Joost van de Weijer, Laura Lopez-Fuentes, and Bogdan Raducanu. Transferi2i: Transfer learning for image-to-image translation from small datasets, 2021. 3
288
+ [59] Yaxing Wang, Joost van de Weijer, and Luis Herranz. Mix and match networks: encoder-decoder alignment for zeropair image translation. In CVPR, pages 5467-5476, 2018. 2
289
+ [60] Yaxing Wang, Chenshen Wu, Luis Herranz, Joost van de Weijer, Abel Gonzalez-Garcia, and Bogdan Raducanu. Transferring gans: generating images from limited data. In ECCV, pages 218-234, 2018. 2
290
+ [61] Yaxing Wang, Lu Yu, and Joost van de Weijer. Deepi2i: Enabling deep hierarchical image-to-image translation by transferring from gans. NeurIPS, 2020. 2, 4
291
+ [62] Yang Xue, Yuheng Li, Krishna Kumar Singh, and Yong Jae Lee. Giraffe hd: A high-resolution 3d-aware generative model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18440-18449, 2022. 2
292
+ [63] Shuai Yang, Liming Jiang, Ziwei Liu, and Chen Change Loy. Unsupervised image-to-image translation with generative prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18332-18341, 2022. 2, 3
293
+ [64] Zili Yi, Hao Zhang, Ping Tan, and Minglun Gong. Dualgan: Unsupervised dual learning for image-to-image translation. In ICCV, 2017. 3
294
+ [65] Xiaoming Yu, Yuanqi Chen, Shan Liu, Thomas Li, and Ge Li. Multi-mapping image-to-image translation via learning disentanglement. In NeurIPS, pages 2990-2999, 2019. 2, 3
295
+ [66] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020. 2
296
+ [67] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586-595, 2018. 6
297
+ [68] Xuanmeng Zhang, Zhedong Zheng, Daiheng Gao, Bang Zhang, Pan Pan, and Yi Yang. Multi-view consistent generative adversarial networks for 3d-aware image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18450-18459, 2022. 2
298
+ [69] Peng Zhou, Lingxi Xie, Bingbing Ni, and Qi Tian. Cips-3d: A 3d-aware generator of gans based on conditionally-independent pixel synthesis. arXiv preprint arXiv:2110.09788, 2021. 2
299
+ [70] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, pages 2223-2232, 2017. 2, 3
300
+
301
+ [71] Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A Efros, Oliver Wang, and Eli Shechtman. Toward multimodal image-to-image translation. In NeurIPS, pages 465-476, 2017. 3
3dawaremulticlassimagetoimagetranslationwithnerfs/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f0fd33203c4ec9292ab8f5907cd8d282434762fab0bfcae00debed7c4973c01f
3
+ size 1001715
3dawaremulticlassimagetoimagetranslationwithnerfs/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:680a878c30b9bfcfdea2c209b2b6aba2c392372fa6ee31fb764bc5ffaa464f09
3
+ size 434566
3dawareobjectgoalnavigationviasimultaneousexplorationandidentification/e3176243-c1cd-415f-8bca-116983524509_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:402825ae88f9620b3cee30c586abc817ffb532bef834d68d58c4f000a85e0291
3
+ size 80362
3dawareobjectgoalnavigationviasimultaneousexplorationandidentification/e3176243-c1cd-415f-8bca-116983524509_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a30079cf0f694a86d05edcaf25b54c31b78460bb65b7deacd461104dc8ab639a
3
+ size 102520