Chelsea707 committed on
Commit 4a001d6 · verified · 1 Parent(s): d137bd3

Add Batch d72a813e-291b-44b2-a1bc-144ecebb52e1

This view is limited to 50 files because it contains too many changes. See raw diff

Files changed (50)
  1. NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_content_list.json +3 -0
  2. NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_model.json +3 -0
  3. NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_origin.pdf +3 -0
  4. NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/full.md +759 -0
  5. NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/images.zip +3 -0
  6. NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/layout.json +3 -0
  7. NeurIPS/2025/WaLRUS_ Wavelets for Long range Representation Using State Space Methods/b6141afb-6183-4c95-9870-399c132ba26a_content_list.json +3 -0
  8. NeurIPS/2025/WaLRUS_ Wavelets for Long range Representation Using State Space Methods/b6141afb-6183-4c95-9870-399c132ba26a_model.json +3 -0
  9. NeurIPS/2025/WaLRUS_ Wavelets for Long range Representation Using State Space Methods/b6141afb-6183-4c95-9870-399c132ba26a_origin.pdf +3 -0
  10. NeurIPS/2025/WaLRUS_ Wavelets for Long range Representation Using State Space Methods/full.md +800 -0
  11. NeurIPS/2025/WaLRUS_ Wavelets for Long range Representation Using State Space Methods/images.zip +3 -0
  12. NeurIPS/2025/WaLRUS_ Wavelets for Long range Representation Using State Space Methods/layout.json +3 -0
  13. NeurIPS/2025/Walking the Schrödinger Bridge_ A Direct Trajectory for Text-to-3D Generation/0f83e49a-64ce-4487-99d8-879796e12187_content_list.json +3 -0
  14. NeurIPS/2025/Walking the Schrödinger Bridge_ A Direct Trajectory for Text-to-3D Generation/0f83e49a-64ce-4487-99d8-879796e12187_model.json +3 -0
  15. NeurIPS/2025/Walking the Schrödinger Bridge_ A Direct Trajectory for Text-to-3D Generation/0f83e49a-64ce-4487-99d8-879796e12187_origin.pdf +3 -0
  16. NeurIPS/2025/Walking the Schrödinger Bridge_ A Direct Trajectory for Text-to-3D Generation/full.md +627 -0
  17. NeurIPS/2025/Walking the Schrödinger Bridge_ A Direct Trajectory for Text-to-3D Generation/images.zip +3 -0
  18. NeurIPS/2025/Walking the Schrödinger Bridge_ A Direct Trajectory for Text-to-3D Generation/layout.json +3 -0
  19. NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_content_list.json +3 -0
  20. NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_model.json +3 -0
  21. NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_origin.pdf +3 -0
  22. NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/full.md +0 -0
  23. NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/images.zip +3 -0
  24. NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/layout.json +3 -0
  25. NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_content_list.json +3 -0
  26. NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_model.json +3 -0
  27. NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_origin.pdf +3 -0
  28. NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/full.md +0 -0
  29. NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/images.zip +3 -0
  30. NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/layout.json +3 -0
  31. NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_content_list.json +3 -0
  32. NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_model.json +3 -0
  33. NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_origin.pdf +3 -0
  34. NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/full.md +684 -0
  35. NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/images.zip +3 -0
  36. NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/layout.json +3 -0
  37. NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_content_list.json +3 -0
  38. NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_model.json +3 -0
  39. NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_origin.pdf +3 -0
  40. NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/full.md +0 -0
  41. NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/images.zip +3 -0
  42. NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/layout.json +3 -0
  43. NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_content_list.json +3 -0
  44. NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_model.json +3 -0
  45. NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_origin.pdf +3 -0
  46. NeurIPS/2025/Wasserstein Transfer Learning/full.md +0 -0
  47. NeurIPS/2025/Wasserstein Transfer Learning/images.zip +3 -0
  48. NeurIPS/2025/Wasserstein Transfer Learning/layout.json +3 -0
  49. NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/85ec6fc7-4946-42fc-bcea-a7c0495ae730_content_list.json +3 -0
  50. NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/85ec6fc7-4946-42fc-bcea-a7c0495ae730_model.json +3 -0
NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c31941e5f524bcdbc91e2393d86ea6ccc2c2ac35661823da7d1b5638f615b30
+ size 158298
NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29d5e05111674857d524122408ba064f30b2027c0a2d8793324cdbe72a2f0a80
+ size 206942
NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91663ec93bb14dbec7de48a4704788f761a09939b1370261d28c6ef3c47fac35
+ size 46210820
NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/full.md ADDED
@@ -0,0 +1,759 @@
1
+ # WMCopier: Forging Invisible Image Watermarks on Arbitrary Images
2
+
3
+ Ziping Dong $^{1}$ Chao Shuai $^{1}$ Zhongjie Ba $^{1,2*}$ Peng Cheng $^{1,2}$ Zhan Qin $^{1,2}$ Qinglong Wang $^{1,2}$ Kui Ren $^{1,2}$
4
+
5
+ <sup>1</sup>The State Key Laboratory of Blockchain and Data Security, Zhejiang University
6
+ $^{2}$Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security, Hangzhou, Zhejiang, China
7
+
8
+ {dongziping,chaoshuai,zhongjieba,pengcheng,qinzhan,qinglong.wang,kuiren}@zju.edu.cn
9
+
10
+ # Abstract
11
+
12
+ Invisible Image Watermarking is crucial for ensuring content provenance and accountability in generative AI. While Gen-AI providers are increasingly integrating invisible watermarking systems, the robustness of these schemes against forgery attacks remains poorly characterized. This is critical, as forging traceable watermarks onto illicit content leads to false attribution, potentially harming the reputation and legal standing of Gen-AI service providers who are not responsible for the content. In this work, we propose WMCopier, an effective watermark forgery attack that operates without requiring any prior knowledge of or access to the target watermarking algorithm. Our approach first models the target watermark distribution using an unconditional diffusion model, and then seamlessly embeds the target watermark into a non-watermarked image via a shallow inversion process. We also incorporate an iterative optimization procedure that refines the reconstructed image to further trade off the fidelity and forgery efficiency. Experimental results demonstrate that WMCopier effectively deceives both open-source and closed-source watermark systems (e.g., Amazon's system), achieving a significantly higher success rate than existing methods<sup>2</sup>. Additionally, we evaluate the robustness of forged samples and discuss the potential defenses against our attack. Code is available at: https://github.com/holdrain/WMCopier.
13
+
14
+ # 1 Introduction
15
+
16
+ As generative models raise concerns about the potential misuse of such technologies for generating misleading or fictitious imagery [1], watermarking techniques have become a key solution for embedding traceable information into generated content, ensuring its provenance [2]. Driven by government initiatives [3], AI companies, including Google and Amazon, are increasingly adopting invisible watermarking techniques for their generated content [4, 5], owing to the benefits of imperceptibility and robustness [6, 7].
17
+
18
+ However, existing invisible watermark systems are vulnerable to diverse attacks, including detection evasion [8, 9] and forgery [10, 11]. Although the former has received considerable research attention, forgery attacks remain poorly explored. Forgery attacks, where non-watermarked content is falsely detected as watermarked, pose a significant challenge to the reliability of watermarking systems. These attacks maliciously attribute harmful watermarked content to innocent parties, such as Generative AI (Gen-AI) service providers, damaging the reputation of providers [12, 13].
19
+
20
+ Existing watermark forgery attacks are broadly categorized into two scenarios: the black-box setting and the no-box setting. In the black-box setting, the attacker has partial access to the watermarking system, such as knowledge of the specific watermarking algorithm [14], the ability to obtain paired data (clean images and their watermarked versions) via the embedding interface [15, 16], or query access to the watermark detector [14]. However, such black-box access is unrealistic in practice, as the watermark embedding process is typically integrated into the generative service itself, rendering it inaccessible to end users and thus preventing paired data acquisition. Moreover, service providers rarely disclose the specific watermarking algorithms they employ [5]. Therefore, our focus is primarily on the no-box setting, where the attacker has neither knowledge of the watermarking algorithm nor access to its implementation, and only a collection of generated images with unknown watermarking schemes is available. Under this setting, Yang et al. [10] attempt to extract the watermark pattern by computing the mean residual between watermarked images and natural images from ImageNet [17], and then directly adding the estimated pattern to forged images at the pixel level. However, this achieves limited performance because it assumes that the watermark signal remains constant across all images. Moreover, its estimation is further hindered by the domain gap between ImageNet images and the unknown clean counterparts of the watermarked samples.
21
+
22
+ Inspired by recent work [18-20], demonstrating that diffusion models serve as powerful priors capable of capturing complex data distributions, we ask a more exploratory question:
23
+
24
+ # Can diffusion models act as copiers for invisible watermarks?
25
+
26
+ To be more precise, can we leverage them to copy the underlying watermark signals embedded in watermarked images?
27
+
28
+ Building on this insight, we propose WMCopier, a no-box watermark forgery attack framework tailored for practical adversarial scenarios. In this setting, the attacker has no prior knowledge of the watermarking scheme used by the provider and only has access to watermarked content generated by the Gen-AI service. Specifically, we first train an unconditional diffusion model on watermarked images to capture their underlying distribution. Then, we perform a shallow inversion to map clean images to their latent representations, followed by a denoising process that injects the watermark signal utilizing the trained diffusion model. To further mitigate artifacts introduced during inversion, we propose a refinement procedure that jointly optimizes image quality and alignment with the target watermark distribution.
29
+
30
+ To evaluate the effectiveness of WMCopier, we perform comprehensive experiments across a range of watermarking schemes, including a closed-source one (Amazon's system). Experimental results demonstrate that our attack achieves a high forgery success rate while preserving excellent visual fidelity. Furthermore, we conduct a comparative robustness analysis between genuine and forged watermarks. Finally, we explore a multi-message defense strategy that provides practical guidance for improving future watermark design and deployment.
31
+
32
+ Our key contributions are summarized as follows:
33
+
34
+ - We propose WMCopier, the first no-box watermark forgery attack based on diffusion models, which forges watermark signals directly from watermarked images without requiring any knowledge of the watermarking scheme.
35
+ - We introduce a shallow inversion strategy and a refinement procedure, which inject the target watermark signal into arbitrary clean images while jointly optimizing image quality and conformity to the watermark distribution.
36
+ - Through extensive experiments, we demonstrate that WMCopier effectively forges a wide range of watermark schemes, achieving superior forgery success rates and visual fidelity, including against Amazon's deployed watermarking system.
37
+ - We explore a potential defense strategy that provides insights to improve future watermarking systems.
38
+
39
+ # 2 Preliminary
40
+
41
+ # 2.1 DDIM and DDIM Inversion
42
+
43
+ DDIM. Diffusion models generate data by progressively adding noise in the forward process and then denoising from pure Gaussian noise during the reverse process. The forward diffusion process is
44
+
45
+ modeled as a Markov chain, where Gaussian noise is gradually added to the data $x_0$ over time. At each time step $t$ , the noised sample $x_{t}$ can be obtained in closed form as:
46
+
47
+ $$
48
+ x _ {t} = \sqrt {\alpha_ {t}} x _ {0} + \sqrt {1 - \alpha_ {t}} \epsilon , \quad \epsilon \sim \mathcal {N} (0, \mathbb {I}) \tag {1}
49
+ $$
50
+
51
+ where $\alpha_{t}$ is the noise schedule, and $\epsilon$ is standard Gaussian noise.
52
+
53
+ DDIM [21] is a deterministic sampling approach for diffusion models, enabling faster sampling and inversion through deterministic trajectory tracing. In DDIM sampling, the denoising process starts from Gaussian noise $x_{T} \sim \mathcal{N}(0,\mathbb{I})$ and proceeds according to:
54
+
55
+ $$
56
+ x _ {t - 1} = \sqrt {\alpha_ {t - 1}} \cdot \left(\frac {x _ {t} - \sqrt {1 - \alpha_ {t}}}{\sqrt {\alpha_ {t}}}\right) + \sqrt {1 - \alpha_ {t - 1}} \cdot \epsilon_ {\theta} (x _ {t}, t) \tag {2}
57
+ $$
58
+
59
+ for $t = T, T - 1, \dots, 1$ , eventually yielding the generated sample $x_0$ . Here, $\epsilon_{\theta}(x_t, t)$ denotes a neural network, which is trained to predict the noise added to $x_0$ at step $t$ during the forward process, by minimizing the following objective:
60
+
61
+ $$
62
+ \mathbb {E} _ {x _ {0} \sim p _ {\mathrm {d a t a}}, t \sim \mathcal {U} (1, T), \epsilon \sim \mathcal {N} (0, \mathbb {I})} \left[ \| \epsilon_ {\theta} (x _ {t}, t) - \epsilon \| _ {2} ^ {2} \right]. \tag {3}
63
+ $$
64
+
65
+ DDIM Inversion. DDIM inversion [22, 21] allows an image $x_0$ to be approximately mapped back to its corresponding latent representation $x_{t}$ at step $t$ by reversing the sampling trajectory. DDIM inversion has found widespread applications in computer vision, such as image editing [22, 23] and watermarking [24, 25]. We denote this inversion procedure from $x_0$ to $x_{t}$ as:
66
+
67
+ $$
68
+ x _ {t} = \operatorname {I n v e r s i o n} \left(x _ {0}, t\right). \tag {4}
69
+ $$
70
+
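As an illustration of Equations 2 and 4, the following is a minimal PyTorch-style sketch of one deterministic DDIM denoising step and the corresponding inversion step. This is not the authors' code; `eps_theta` (the noise predictor) and `alphas` (a 1-D tensor of cumulative noise-schedule values $\alpha_t$ indexed by timestep) are assumed interfaces.

```python
def ddim_step(x_t, t, t_prev, eps_theta, alphas):
    """One deterministic DDIM denoising step (Eq. 2): x_t -> x_{t_prev}, t_prev < t."""
    a_t, a_prev = alphas[t], alphas[t_prev]
    eps = eps_theta(x_t, t)
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()   # predicted clean image
    return a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps

def ddim_inversion_step(x_t, t, t_next, eps_theta, alphas):
    """One DDIM inversion step (Eq. 4, applied stepwise): x_t -> x_{t_next}, t_next > t."""
    a_t, a_next = alphas[t], alphas[t_next]
    eps = eps_theta(x_t, t)                                  # noise estimate reused along the trajectory
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    return a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
```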
71
+ # 2.2 Invisible Image Watermarking
72
+
73
+ Invisible image watermarking helps regulators and the public identify AI-generated content and trace harmful outputs (such as NSFW or misleading material) back to the responsible service provider, thus enabling accountability attribution. Specifically, the watermark message inserted by the service provider typically serves as a model identifier [26]. For example, Stability AI embeds the identifier StableDiffusionV1 by converting it into a bit string and encoding it as a watermark [27]. A list of currently deployed real-world watermarking systems is provided in Table 6 in Appendix B.
74
+
75
+ Invisible image watermarking typically involves three stages: embedding, extraction, and verification. Given a clean (non-watermarked) image $x \in \mathbb{R}^{H \times W \times 3}$ and a binary watermark message $m \in \{0,1\}^K$ , the embedding process uses an encoder $E$ to produce a watermarked image:
76
+
77
+ $$
78
+ x ^ {w} = E (x, m).
79
+ $$
80
+
81
+ During the extraction stage, a detector $D$ attempts to recover the embedded message from $x^{w}$ :
82
+
83
+ $$
84
+ m ^ {\prime} = D (x ^ {w}).
85
+ $$
86
+
87
+ During the verification stage, the extracted message $m'$ is evaluated against the original message $m$ using a verification function $V$, which measures their similarity in terms of bit accuracy. An image is considered watermarked if its bit accuracy exceeds a predefined threshold $\rho$, where $\rho$ is typically selected to achieve a desired false positive rate (FPR). For instance, to achieve an FPR below 0.05 for a 40-bit message, $\rho$ should be set to $\frac{26}{40}$, based on a Bernoulli distribution assumption [28]. Formally, the verification function is defined as:
88
+
89
+ $$
90
+ V\left(m, m^{\prime}, \rho\right) = \left\{ \begin{array}{ll} \text{Watermarked}, & \text{if Bit-Accuracy}\left(m, m^{\prime}\right) \geq \rho; \\ \text{Non-Watermarked}, & \text{otherwise}. \end{array} \right. \tag{5}
91
+ $$
92
+
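A small, self-contained sketch of the verification rule in Equation 5 and of how $\rho$ can be chosen from the Bernoulli assumption; the function names are illustrative and not part of any specific watermarking library.

```python
from math import comb

def threshold_bits(k_bits, target_fpr):
    """Smallest bit-match count tau with P[Binomial(k_bits, 0.5) >= tau] <= target_fpr."""
    for tau in range(k_bits + 1):
        tail = sum(comb(k_bits, i) for i in range(tau, k_bits + 1)) / 2 ** k_bits
        if tail <= target_fpr:
            return tau
    return k_bits

def verify(m, m_prime, rho):
    """Eq. 5: declare 'Watermarked' iff the bit accuracy of m' against m is at least rho."""
    bit_acc = sum(a == b for a, b in zip(m, m_prime)) / len(m)
    return bit_acc >= rho

# For the example in the text, threshold_bits(40, 0.05) should give 26, i.e. rho = 26/40.
```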
93
+ # 3 Threat Model
94
+
95
+ In a watermark forgery attack, the attacker forges the watermark of a service provider onto clean images, including malicious or illegal content. As a result, these forged images may be incorrectly attributed to the service provider, leading to reputation harm and legal ramifications.
96
+
97
+ Attacker's Goal. The attacker aims to produce a forged watermarked image $x^{f}$ that visually resembles a given clean image $x$ , yet is detected by detector $D$ as containing a target watermark
98
+
99
+ ![](images/0b29c24a62672776d960864ce9ad8499df838ccb31e2360ac31a9cb321912dad.jpg)
100
+ Figure 1: The pipeline of WMCopier. The WMCopier consists of three stages. In the first stage, an unconditional diffusion model is trained to estimate the watermark distribution. In the second stage, the estimated watermark is injected into a non-watermarked image using shallow inversion and denoising. Finally, a refinement procedure is applied to mitigate artifacts and ensure conformity to the target watermark distribution $p_w(x)$ .
101
+
102
+ message $m$ . Specifically, visual consistency is required to retain the original (possibly harmful) semantic content and to avoid visible artifacts that may reveal the attack.
103
+
104
+ Attacker's Capability. We consider a threat model under the no-box setting:
105
+
106
+ - The attacker does not know the target watermarking scheme and its internal parameters. They have no access to embed watermarks into their own images and the corresponding detection pipeline.
107
+ - The attacker can collect a subset of watermarked images from AI-generated content platforms (e.g., PromptBase [29], PromptHero [30]) or directly use the target Gen-AI service.
108
+ - The attacker assumes a static watermarking scheme, i.e., the service provider does not alter the watermarking scheme during the attack period.
109
+
110
+ # 4 WMCopier
111
+
112
+ In this section, we introduce WMCopier, a watermark forgery attack pipeline consisting of three stages: (1) Watermark Estimation, (2) Watermark Injection, and (3) Refinement. An overview of the proposed framework is illustrated in Figure 1.
113
+
114
+ # 4.1 Watermark Estimation
115
+
116
+ Diffusion models are used to fit a plausible data manifold [21, 31, 32] by optimizing Equation 3. The noise predictor $\epsilon_{\theta}(x_t,t)$ approximates the conditional expectation of the noise:
117
+
118
+ $$
119
+ \epsilon_ {\theta} \left(x _ {t}, t\right) \approx \mathbb {E} [ \epsilon \mid x _ {t} ] := \hat {\epsilon} \left(x _ {t}\right), \tag {6}
120
+ $$
121
+
122
+ which effectively turns $\epsilon_{\theta}$ into a regressor for the conditional noise distribution.
123
+
124
+ Now consider a clean image $x$ and its watermarked version $x^{w} = x + w$, where $w$ denotes the embedded watermark signal, which can also be interpreted as the perturbation introduced by the embedding process. During the forward diffusion process, we have:
125
+
126
+ $$
127
+ x _ {t} ^ {w} = \sqrt {\alpha_ {t}} (x + w) + \sqrt {1 - \alpha_ {t}} \epsilon = x _ {t} + \sqrt {\alpha_ {t}} w, \tag {7}
128
+ $$
129
+
130
+ where $x_{t}$ is the noisy version of the clean image at step $t$ . The presence of the additive term $\sqrt{\alpha_t} w$ implies that the input to the noise predictor carries a watermark-dependent shift. As a result, the predicted noise satisfies:
131
+
132
+ $$
133
+ \epsilon_ {\theta} \left(x _ {t} ^ {w}, t\right) = \hat {\epsilon} \left(x _ {t} ^ {w}\right) = \hat {\epsilon} \left(x _ {t} + \sqrt {\alpha_ {t}} w\right) \approx \hat {\epsilon} \left(x _ {t}\right) + \delta_ {t} (w), \tag {8}
134
+ $$
135
+
136
+ ![](images/f3b8b12e6228e209242e224d040f348c9091b6e014d6f130bb92edd11500e45f.jpg)
137
+ Figure 2: Watermark detectability of four open-source watermarking schemes throughout the diffusion and denoising processes ( $T = 1000$ ). As a reference, the bit accuracy of non-watermarked images remains around 0.5.
138
+
139
+ ![](images/d65d2e10e1eb26542e5a354c4b68d1dda4bfdfc35ad5fbfa3539674b9e41be93.jpg)
140
+
141
+ where $\delta_t(w)$ denotes the systematic prediction bias introduced by the watermark signal. These biases accumulate subtly at each denoising step, gradually steering the model's output distribution toward the watermarked distribution $p_w(x)$ .
142
+
143
+ To exploit this behavior, we construct an auxiliary dataset $\mathcal{D}_{\mathrm{aux}} = \{x^w | x^w \sim p_w(x)\}$ , where each image contains an embedded watermark message $m$ . We then train an unconditional diffusion model $\mathcal{M}_{\theta}$ on $\mathcal{D}_{\mathrm{aux}}$ .
144
+
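Concretely, the only attacker-specific ingredient at this stage is that the standard noise-prediction objective of Equation 3 is optimized on watermarked images. A minimal PyTorch-style training step might look as follows (a sketch, assuming `eps_theta` is a UNet-style noise predictor and `alphas` holds the cumulative schedule values for $t = 0, \dots, T$):

```python
import torch
import torch.nn.functional as F

def train_step(eps_theta, optimizer, x_w, alphas, T=1000):
    """One optimization step of Eq. 3 on a batch of *watermarked* images x_w ~ p_w(x)."""
    t = torch.randint(1, T + 1, (x_w.size(0),), device=x_w.device)
    a_t = alphas[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x_w)
    x_t = a_t.sqrt() * x_w + (1 - a_t).sqrt() * eps      # forward process, Eq. 1
    loss = F.mse_loss(eps_theta(x_t, t), eps)             # noise-prediction loss, Eq. 3
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```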
145
+ Our goal is to obtain forged images $x^{f}$ with watermark signals while preserving the semantic content of a clean image $x$ . Therefore, given the pretrained model $\mathcal{M}_{\theta}$ and a clean image $x$ , we first apply DDIM inversion to obtain a latent representation $x_{T}$ :
146
+
147
+ $$
148
+ x _ {T} = \operatorname {I n v e r s i o n} (x, T). \tag {9}
149
+ $$
150
+
151
+ The latent representation retains semantic information about the clean image. Starting from $x_{T}$ , we apply the denoising process described in Equation 2 to obtain the forged image $x^{f}$ , where the bias in Equation 8 naturally guides the denoising process toward the distribution of watermarked images.
152
+
153
+ # 4.2 Watermark Injection
154
+
155
+ We observe that reconstructed images obtained with full-step inversion suffer from severe quality degradation, as illustrated in the top row of Figure 3. This phenomenon is attributed to the fact that inversion tends to accumulate reconstruction errors when the input clean images are out of the training data distribution, especially as the inversion depth increases [22, 33, 21]. To mitigate this, we investigate the watermark detectability of four open-source watermarking methods throughout the diffusion and denoising processes. As illustrated in Figure 2, the watermark signal tends to be destroyed only gradually during the shallow steps (e.g., $t \leq 400$ for $T = 1000$); consequently, the watermark signal is largely restored during these shallow denoising steps.
156
+
157
+ Therefore, we propose a shallow inversion strategy that performs the inversion process up to an early timestep $T_{S} < T$ . By skipping deeper diffusion steps that contribute minimally to watermark injection yet substantially distort image semantics, our method effectively preserves the visual fidelity of reconstructed images while ensuring reliable watermark injection.
158
+
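Putting Sections 4.1 and 4.2 together, the forging procedure can be sketched as a shallow DDIM inversion followed by denoising with the watermark-trained model. This reuses the `ddim_step` and `ddim_inversion_step` helpers sketched in Section 2.1 and is an illustrative outline, not the authors' implementation; `timesteps` denotes an assumed increasing DDIM schedule.

```python
import torch

@torch.no_grad()
def forge(x_clean, eps_theta, alphas, timesteps, T_s=40):
    """Shallow-inversion forgery (Secs. 4.1-4.2): invert a clean image up to the early
    step T_s with the watermark-trained model, then denoise back along the same trajectory.
    `timesteps` is an increasing DDIM schedule, e.g. list(range(0, 1000, 10)) for 100 steps."""
    x = x_clean
    for i in range(T_s):            # shallow DDIM inversion: x_0 -> x_{T_s}
        x = ddim_inversion_step(x, timesteps[i], timesteps[i + 1], eps_theta, alphas)
    for i in range(T_s, 0, -1):     # denoising; the watermark bias of Eq. 8 is injected here
        x = ddim_step(x, timesteps[i], timesteps[i - 1], eps_theta, alphas)
    return x
```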
159
+ # 4.3 Refinement
160
+
161
+ Although shallow inversion effectively reduces reconstruction errors, forged images may still exhibit minor artifacts (as shown in Figure 3) that cause the forged images to be visually distinguishable, thus exposing the forgery. To address this, we propose a refinement procedure to adjust the forged image $x^{f}$ , defined as:
162
+
163
+ $$
164
+ x ^ {f (i + 1)} = x ^ {f (i)} + \eta \nabla_ {x ^ {f (i)}} \left[ \log p _ {w} \left(x ^ {f (i)}\right) - \lambda \| x ^ {f (i)} - x \| ^ {2} \right], i \in \{0, 1, \dots , L \} \tag {10}
165
+ $$
166
+
167
+ where $\eta$ is the step size, $\lambda$ balances semantic fidelity and watermark injection, and $L$ is the number of optimization iterations. The log-likelihood $\log p_w(x^f)$ constrains the samples to lie in regions of high probability
168
+
169
+ ![](images/dafc93e601a1b9782ab576ce181f5be6e338698c2a26f7b4afe2451f22bd94cd.jpg)
170
+ Figure 3: Forged samples generated using full-step inversion, shallow inversion, and shallow inversion with refinement. The first row shows results from full-step inversion ( $T_{S} = T = 100$ ), where the semantic content of the original clean image is heavily disrupted. The second row corresponds to shallow inversion ( $T_{S} = 40$ , $T = 100$ ), which introduces only slight artifacts. The third row demonstrates shallow inversion with refinement, where these artifacts are further reduced.
171
+
172
+ under the watermarked image distribution $p_w(x)$ , while the mean squared error (MSE) term $\| x^{f(i)} - x\| ^2$ ensures that the refined image remains similar to the clean image $x$ . Since the distribution $p_w(x)$ and the conditional noise distribution $p_w^t (x_t)$ are nearly identical at a low noise step $t_l$ , the score function $\nabla \log p_w(x)$ can be approximated by $\nabla \log p_w^t (x_t)$ . This score can be estimated using a pre-trained diffusion model $\mathcal{M}_{\theta}$ [34, 35], as defined in Equation 11, where $x_{t}^{f} = \sqrt{\alpha_{t}} x^{f} + \sqrt{1 - \alpha_{t}}\epsilon$ .
173
+
174
+ $$
175
+ \nabla_ {x ^ {f}} \log p _ {w} (x ^ {f}) \approx \nabla_ {x _ {t _ {l}} ^ {f}} \log p _ {w} ^ {t _ {l}} (x _ {t _ {l}} ^ {f}) \approx - \frac {1}{\sqrt {1 - \alpha_ {t _ {l}}}} \epsilon_ {\theta} (x _ {t _ {l}} ^ {f}, t _ {l}). \tag {11}
176
+ $$
177
+
178
+ By performing this refinement for $L$ iterations, we obtain the final forged watermarked image $\hat{x}^{f}$. This refinement improves both the watermark detectability and the image quality of the forged images, as demonstrated in Figure 3 and Table 11. A complete overview of the WMCopier procedure is summarized in Algorithm 1.
179
+
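For illustration, the refinement loop of Equations 10 and 11 can be sketched as follows, assuming the same `eps_theta` and `alphas` interfaces as above; the default values mirror the hyperparameters reported in Section 5.

```python
import torch

@torch.no_grad()
def refine(x_f, x_clean, eps_theta, alphas, L=100, eta=1e-4, lam=100.0, t_l=1):
    """Refinement loop of Eq. 10; the score of p_w is approximated via Eq. 11 at the
    low-noise step t_l."""
    x = x_f.clone()
    for _ in range(L):
        a = alphas[t_l]
        x_t = a.sqrt() * x + (1 - a).sqrt() * torch.randn_like(x)
        score = -eps_theta(x_t, t_l) / (1 - a).sqrt()   # approx. grad of log p_w(x), Eq. 11
        grad = score - 2.0 * lam * (x - x_clean)         # gradient of the Eq. 10 objective
        x = x + eta * grad
    return x
```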
180
+ # 5 Evaluation
181
+
182
+ Datasets. To simulate real-world watermark forgery scenarios, we train our diffusion model on AI-generated images and apply watermark forgeries to both AI-generated and real photographs. For AI-generated images, we use DiffusionDB [36] that contains a diverse collection of images generated by Stable Diffusion [37]. For real photographs, we adopt three widely-used datasets in computer vision: MS-COCO [38], ImageNet [17], and CelebA-HQ [39].
183
+
184
+ Watermarking Schemes. We evaluate four open-source watermarking schemes: three post-processing methods (DWT-DCT [40], HiDDeN [41], and RivaGAN [42]) and one in-processing method (Stable Signature [26]), as well as a closed-source watermarking system from Amazon [4]. Each watermarking scheme is evaluated using its official default configuration. A comprehensive description of these methods is included in Appendix C.
185
+
186
+ Attack Parameters and Baselines. For the diffusion model, we adopt DDIM sampling with a total of $T = 100$ steps and perform inversion up to step $T_{S} = 40$. Further details regarding the training of the diffusion model are provided in Appendix F. For the refinement procedure, we set the trade-off coefficient $\lambda$ to 100, the number of refinement iterations $L$ to 100, the low-noise step $t_{l}$ to 1, and the step size $\eta$ to $1 \times 10^{-4}$ by default. To balance the attack performance and the potential cost of acquiring generated images (e.g., fees from Gen-AI services), we set the size of the auxiliary dataset $D_{\mathrm{aux}}$ to 5,000 in our main experiments. For comparison, we consider the method by Yang et al. [10], which operates under the same no-box setting as ours, and Wang et al. [16], which assumes a black-box setting with access to paired watermarked and clean images.
187
+
188
+ <table><tr><td rowspan="2" colspan="2">Attacks</td><td colspan="4">Black Box</td><td colspan="3">No-Box</td><td colspan="2">No-Box</td></tr><tr><td colspan="4">Wang et al. [16]</td><td colspan="3">Yang et al. [10]</td><td colspan="2">Ours</td></tr><tr><td>Watermark scheme</td><td>Dataset</td><td>PSNR↑</td><td>Forged Bit-acc↑</td><td>FPR@10-6↑</td><td>PSNR↑</td><td>Forged Bit-acc,↑</td><td>FPR@10-6↑</td><td>PSNR↑</td><td>Forged Bit-acc,↑</td><td>FPR@10-6↑</td></tr><tr><td rowspan="4">DWT-DCT</td><td>MS-COCO</td><td>31.33</td><td>74.32%</td><td>57.20%</td><td>32.87</td><td>53.08%</td><td>0.50%</td><td>33.69</td><td>89.19%</td><td>60.20%</td></tr><tr><td>CelebAHQ</td><td>32.19</td><td>81.29%</td><td>50.70%</td><td>32.90</td><td>53.68%</td><td>0.10%</td><td>35.29</td><td>89.46%</td><td>53.20%</td></tr><tr><td>ImageNet</td><td>30.16</td><td>79.64%</td><td>55.10%</td><td>32.92</td><td>51.96%</td><td>0.20%</td><td>33.75</td><td>88.25%</td><td>55.80%</td></tr><tr><td>Diffusiondb</td><td>31.87</td><td>78.22%</td><td>50.80%</td><td>32.90</td><td>51.59%</td><td>0.40%</td><td>33.84</td><td>85.17%</td><td>54.30%</td></tr><tr><td rowspan="4">HiddeN</td><td>MS-COCO</td><td>31.02</td><td>80.56%</td><td>77.60%</td><td>29.68</td><td>63.12%</td><td>0.00%</td><td>31.74</td><td>99.34%</td><td>95.90%</td></tr><tr><td>CelebAHQ</td><td>31.57</td><td>82.28%</td><td>80.20%</td><td>29.79</td><td>61.52%</td><td>0.00%</td><td>33.12</td><td>98.08%</td><td>92.50%</td></tr><tr><td>ImageNet</td><td>31.24</td><td>78.61%</td><td>83.90%</td><td>29.78</td><td>62.66%</td><td>0.00%</td><td>31.76</td><td>98.99%</td><td>94.30%</td></tr><tr><td>Diffusiondb</td><td>30.74</td><td>79.99%</td><td>79.20%</td><td>29.68</td><td>63.36%</td><td>0.00%</td><td>31.46</td><td>98.83%</td><td>94.60%</td></tr><tr><td rowspan="4">RivaGAN</td><td>MS-COCO</td><td>32.94</td><td>93.26%</td><td>88.80%</td><td>29.12</td><td>50.80%</td><td>0.00%</td><td>34.07</td><td>95.74%</td><td>90.90%</td></tr><tr><td>CelebAHQ</td><td>32.64</td><td>93.67%</td><td>93.80%</td><td>29.23</td><td>52.29%</td><td>0.00%</td><td>35.28</td><td>98.61%</td><td>96.00%</td></tr><tr><td>ImageNet</td><td>33.11</td><td>90.94%</td><td>71.40%</td><td>29.22</td><td>50.92%</td><td>0.00%</td><td>33.87</td><td>93.83%</td><td>77.10%</td></tr><tr><td>Diffusiondb</td><td>33.31</td><td>89.69%</td><td>80.60%</td><td>29.12</td><td>48.70%</td><td>0.00%</td><td>34.50</td><td>90.43%</td><td>84.80%</td></tr><tr><td rowspan="4">Stable Signature</td><td>MS-COCO</td><td>28.87</td><td>91.68%</td><td>88.90%</td><td>30.77</td><td>52.67%</td><td>0.00%</td><td>31.29</td><td>98.04%</td><td>94.60%</td></tr><tr><td>CelebAHQ</td><td>32.33</td><td>79.90%</td><td>90.10%</td><td>30.51</td><td>51.73%</td><td>0.00%</td><td>30.54</td><td>96.04%</td><td>100.00%</td></tr><tr><td>ImageNet</td><td>29.59</td><td>85.77%</td><td>85.90%</td><td>30.75</td><td>51.59%</td><td>0.00%</td><td>31.33</td><td>97.03%</td><td>98.60%</td></tr><tr><td>Diffusiondb</td><td>31.11</td><td>89.24%</td><td>92.10%</td><td>30.65</td><td>52.69%</td><td>0.00%</td><td>31.59</td><td>96.24%</td><td>96.60%</td></tr><tr><td colspan="2">Average</td><td>31.50</td><td>84.32%</td><td>76.64%</td><td>30.62</td><td>54.52%</td><td>0.08%</td><td>32.94</td><td>94.58%</td><td>83.71%</td></tr></table>
189
+
190
+ Metrics. We evaluate the visual quality of forged images using Peak Signal-to-Noise Ratio (PSNR), defined as $\mathrm{PSNR}(x, \hat{x}^{f}) = -10 \cdot \log_{10}\left(\mathrm{MSE}(x, \hat{x}^{f})\right)$ for pixel values normalized to $[0, 1]$, where $x$ is the clean image and $\hat{x}^{f}$ is the forged image after the refinement process. A higher PSNR indicates better visual fidelity, i.e., the forged image is more similar to the original. We evaluate the attack effectiveness in terms of bit accuracy and false positive rate (FPR). Bit accuracy measures the proportion of watermark bits in the extracted message that match the target. FPR refers to the rate at which forged samples are incorrectly identified as valid watermarked images; a higher FPR thus indicates a more successful attack. We report FPR at a threshold calibrated to yield a $10^{-6}$ false positive rate on clean images.
191
+
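For clarity, the two image-level metrics can be computed as follows (a sketch assuming image tensors with pixel values in $[0, 1]$ and watermark messages represented as 0/1 tensors):

```python
import torch

def psnr(x, x_forged):
    """PSNR in dB for image tensors with pixel values in [0, 1]."""
    mse = torch.mean((x - x_forged) ** 2)
    return (-10.0 * torch.log10(mse)).item()

def bit_accuracy(m, m_extracted):
    """Fraction of watermark bits recovered correctly (0/1 tensors of equal length)."""
    return (m == m_extracted).float().mean().item()
```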
192
+ # 5.1 Attacks on Open-Source Watermarking Schemes
193
+
194
+ As shown in Table 1, our WMCopier achieves the highest forged bit accuracy and FPR across all watermarking schemes, even surpassing the baseline operating in the black-box setting. In terms of visual fidelity, all forged images exhibit a PSNR above 30 dB, demonstrating that WMCopier achieves high image quality. For the frequency-domain scheme DWT-DCT, the bit accuracy is slightly lower compared to other schemes. We attribute this to the inherent limitations of DWT-DCT, which originally exhibits low bit accuracy on certain images. A detailed analysis is presented in Appendix D.1.
195
+
196
+ Table 1: Comparison of our WMCopier with two baselines on four open-source watermarking methods. Highlighted cells indicate the highest value in each row for the corresponding metric. Arrows indicate the desired direction of each metric (↑ means higher is better).
197
+
198
+ <table><tr><td rowspan="2">Watermark Scheme</td><td rowspan="2">Dataset</td><td colspan="2">Yang et al. [10]</td><td colspan="2">Ours</td></tr><tr><td>PSNR↑</td><td>SR↑/Con.↑</td><td>PSNR↑</td><td>SR↑/Con.↑</td></tr><tr><td rowspan="4">Amazon WM</td><td>Diffusiondb</td><td>23.42</td><td>29.0%/2</td><td>32.57</td><td>100.0%/2.94</td></tr><tr><td>MS-COCO</td><td>24.18</td><td>32.0%/2</td><td>32.93</td><td>100.0%/2.97</td></tr><tr><td>CelebA-HQ</td><td>24.10</td><td>42.0%/2</td><td>31.84</td><td>100.0%/2.98</td></tr><tr><td>ImageNet</td><td>23.95</td><td>28.0%/2</td><td>32.88</td><td>99.0%/2.89</td></tr></table>
199
+
200
+ Table 2: Performance comparison of baseline and WMCopier on Amazon Watermark.
201
+
202
+ ![](images/b69f3ad513e70e01c53753c299afa1a231271dbbadc26b0a4e25d7512ffb1c26.jpg)
203
+ Figure 4: Comparison of forged bit accuracy distributions: Yang's method vs. ours.
204
+
205
+ # 5.2 Attacks on Closed-Source Watermarking Systems
206
+
207
+ In this subsection, we evaluate the effectiveness of our attack and Yang's method in attacking the Amazon watermarking scheme. The results are shown in Table 2. The success rate (SR), which represents the proportion of images detected as watermarked, and the confidence levels (Con.) returned by the API, are used to evaluate the effectiveness of the attacks on deployed watermarking systems. Compared with Yang's method, our attack achieves superior performance in terms of
208
+
209
+ ![](images/2cc3f446a7d9a0831cc5b07413d80f66ea9059e3fcb86bc509863c6ad0bcbc02.jpg)
210
+ Figure 5: Effect of refinement iterations $L$ (left) and trade-off coefficient $\lambda$ (right) on PSNR and Bit-Accuracy under our forgery attacks, with fixed $\eta = 10^{-4}$.
211
+
212
+ ![](images/3d664d6379ccc833d538189745ad29be3c2e576460f7922c589d3466468affb5.jpg)
213
+
214
+ both visual fidelity and forgery effectiveness. Specifically, our method achieves an average PSNR exceeding 30 dB and a success rate (SR) close to $100\%$, whereas Yang's method typically yields PSNR values below 25 dB and SR values ranging from $28\%$ to $42\%$.
215
+
216
+ Furthermore, our forged images generally receive a confidence level of 3, the highest rating defined by Amazon's watermark detection API, while Yang's results consistently remain at level 2. Since Amazon does not disclose how the confidence score is computed, we conjecture that it may correlate with bit accuracy, based on common assumptions [28]. To further investigate this, we analyze the distribution of forged bit accuracy for both our method and Yang's on an open-source watermarking scheme. As shown in Figure 4, our method achieves over $80\%$ bit accuracy on RivaGAN, significantly outperforming Yang's method, which remains below $70\%$.
217
+
218
+ # 5.3 Ablation Study
219
+
220
+ To evaluate the impact of parameter choices on image quality and forgery effectiveness, we conduct two sets of ablation studies by varying (i) the number of refinement optimization steps $L$ and (ii) the trade-off coefficient $\lambda$ . As shown in Figure 5, increasing $L$ initially improves both PSNR and forged bit accuracy, with performance saturating beyond $L = 100$ . In contrast, larger $\lambda$ values continuously enhance PSNR but lead to a slight degradation in bit accuracy, likely due to over-regularization. While higher PSNR values generally indicate better visual fidelity, we note that visible artefacts may still occur even at elevated PSNR levels. Nevertheless, since an attacker may prioritize forgery success over perceptual quality, we adopt $\lambda = 100$ in our main experiments. The results presented in Table 11 in Appendix E further validate the effectiveness of the refinement process.
221
+
222
+ # 5.4 Robustness
223
+
224
+ To investigate the robustness of the forged images, we evaluate the bit accuracy of genuine and forged watermarked images under common image distortions, including Gaussian noise, JPEG compression, Gaussian blur, and brightness adjustment. Since Stable Signature does not support watermark embedding into arbitrary images, we instead report results on generated images. As shown in Table 3, the forged watermark generally exhibits slightly lower robustness than the genuine watermark. While some cases show over $20\%$ degradation (highlighted in red), relying on bit accuracy under distortion to separate forged from genuine watermarks is inadequate, as it would substantially compromise the true positive rate (TPR), as discussed in Appendix D.3.
225
+
226
+ # 6 Related Work
227
+
228
+ # 6.1 Image Watermarking
229
+
230
+ Image watermarking techniques can generally be categorized into post-processing and in-processing methods, depending on when the watermark is embedded.
231
+
232
+ Post-processing methods embed watermark messages into images after generation. Non-learning-based methods (e.g., LSB [43], DWT-DCT [40, 44]) suffer from poor robustness under common
233
+
234
+ <table><tr><td rowspan="2">Watermark scheme</td><td rowspan="2">Distortion Dataset</td><td colspan="2">JPEG</td><td colspan="2">Blur</td><td colspan="2">Gaussian Noise</td><td colspan="2">Brightness</td></tr><tr><td>Genuine</td><td>Forged</td><td>Genuine</td><td>Forged</td><td>Genuine</td><td>Forged</td><td>Genuine</td><td>Forged</td></tr><tr><td rowspan="4">DWT-DCT</td><td>MS-COCO</td><td>56.44%</td><td>53.00%</td><td>59.84%</td><td>56.56%</td><td>67.86%</td><td>66.90%</td><td>54.66%</td><td>58.36%</td></tr><tr><td>CelebAHQ</td><td>55.42%</td><td>53.14%</td><td>63.12%</td><td>58.26%</td><td>64.84%</td><td>66.49%</td><td>53.89%</td><td>57.73%</td></tr><tr><td>ImageNet</td><td>56.08%</td><td>52.31%</td><td>59.37%</td><td>54.39%</td><td>68.27%</td><td>67.60%</td><td>54.08%</td><td>57.37%</td></tr><tr><td>Diffusiondb</td><td>58.16%</td><td>53.23%</td><td>62.12%</td><td>55.74%</td><td>66.90%</td><td>64.43%</td><td>54.73%</td><td>56.83%</td></tr><tr><td rowspan="4">HiddenN</td><td>MS-COCO</td><td>58.68%</td><td>58.06%</td><td>78.50%</td><td>71.95%</td><td>54.13%</td><td>49.55%</td><td>82.40%</td><td>78.99%</td></tr><tr><td>CelebAHQ</td><td>57.05%</td><td>55.07%</td><td>79.83%</td><td>69.07%</td><td>48.94%</td><td>46.02%</td><td>83.63%</td><td>73.21%</td></tr><tr><td>ImageNet</td><td>58.86%</td><td>57.83%</td><td>78.20%</td><td>71.34%</td><td>54.10%</td><td>49.57%</td><td>80.95%</td><td>77.40%</td></tr><tr><td>Diffusiondb</td><td>58.57%</td><td>57.61%</td><td>79.69%</td><td>72.89%</td><td>54.41%</td><td>50.19%</td><td>81.53%</td><td>77.66%</td></tr><tr><td rowspan="4">RivaGAN</td><td>MS-COCO</td><td>99.44%</td><td>93.32%</td><td>99.60%</td><td>94.99%</td><td>85.71%</td><td>75.00%</td><td>84.51%</td><td>78.81%</td></tr><tr><td>CelebAHQ</td><td>99.92%</td><td>97.22%</td><td>99.97%</td><td>98.23%</td><td>85.93%</td><td>74.83%</td><td>84.60%</td><td>79.53%</td></tr><tr><td>ImageNet</td><td>98.95%</td><td>92.00%</td><td>99.28%</td><td>93.89%</td><td>84.95%</td><td>74.74%</td><td>82.77%</td><td>77.25%</td></tr><tr><td>Diffusiondb</td><td>96.56%</td><td>84.85%</td><td>97.27%</td><td>86.96%</td><td>77.33%</td><td>66.27%</td><td>79.14%</td><td>71.65%</td></tr><tr><td rowspan="4">StableSignature</td><td>MS-COCO</td><td rowspan="4">93.99%</td><td>89.48%</td><td rowspan="4">86.91%</td><td>68.34%</td><td rowspan="4">73.78%</td><td>67.14%</td><td rowspan="4">92.30%</td><td>88.63%</td></tr><tr><td>CelebAHQ</td><td>86.73%</td><td>65.42%</td><td>65.33%</td><td>86.86%</td></tr><tr><td>ImageNet</td><td>87.73%</td><td>64.88%</td><td>61.79%</td><td>91.41%</td></tr><tr><td>Diffusiondb</td><td>85.69%</td><td>65.45%</td><td>61.60%</td><td>87.45%</td></tr></table>
235
+
236
+ Table 3: Bit accuracy of the genuine watermark and the forged watermark under various image distortions. The distortion parameters are: Gaussian noise ($\sigma = 0.05$), JPEG (quality=90), blur (radius=1), and brightness (factor=6). Shaded cells indicate a degradation gap between $10\%$ and $20\%$, and red cells indicate a degradation gap greater than $20\%$.
237
+
238
+ distortions such as compression and noise. Neural network-based approaches mitigate these issues by combining encoder-decoder architectures and adversarial training [41, 45-47]. However, these methods often rely on heavy training and may generalize poorly to unknown attacks.
239
+
240
+ In-processing methods embed watermarks during image generation, either by modifying training data or model weights [19, 48, 28], or by adjusting specific components such as diffusion decoders [26]. Recent work explores semantic watermarking, which binds messages to generative semantics (e.g., Tree-Ring [49] and Gaussian Shading [50]). However, semantic watermarking has not seen real-world deployment [14]. We discuss the effectiveness of our attack on semantic watermarking in Appendix D.2.
241
+
242
+ # 6.2 Watermark Forgery
243
+
244
+ Kutter et al. [51] first introduced the concept, also known as the watermark copy attack, under the assumption that the watermark signal was a fixed constant. While this assumption was reasonable for early handcrafted watermarking methods, it no longer holds for modern neural network-based schemes. Subsequent studies [52, 16, 53, 14] have investigated watermark forgery under either white-box or black-box settings, where the attacker either has full access to the watermarking model or can embed watermarks into their own images. However, these approaches still rely on strong assumptions that may not hold in realistic deployment scenarios.
245
+
246
+ In contrast, the no-box setting assumes that only watermarked images are available to the attacker, without access to the model or embedding process. Yang et al. [10] proposed a heuristic method under this setting by estimating the watermark signal through averaging the residuals between watermarked and clean images, and subsequently re-embedding the estimated pattern at the pixel level. This is the scenario we focus on in this work, as it more accurately reflects practical constraints.
247
+
248
+ # 7 Defense Analysis
249
+
250
+ To harden deployed watermarking systems, we suggest modifying the existing watermark system so as to disrupt the ability of diffusion models to model the watermark distribution. Specifically, we propose a multi-message strategy as a simple yet effective countermeasure. Instead of embedding a fixed watermark message, the system randomly selects one from a predefined message pool $\{m_{1}, m_{2}, \ldots, m_{K}\}$ for each image. During detection, the detector checks for the presence of any valid message in the pool. This strategy introduces uncertainty into the watermark signal, increasing the entropy of possible watermark patterns and making it substantially more difficult for generative models to learn the consistent features necessary for forgery. We implement this defense with different message pool sizes ($K = 10, 50, 100$) and test on 100 images for simplicity.
251
+
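A minimal sketch of the multi-message strategy, assuming an embedding encoder and detector with the interfaces of Section 2.2 and reusing the `verify` helper sketched there; all function names are illustrative.

```python
import secrets

def make_message_pool(K, k_bits=48):
    """Predefined pool of K random binary watermark messages (message length is scheme-dependent)."""
    return [[secrets.randbelow(2) for _ in range(k_bits)] for _ in range(K)]

def embed_with_pool(encoder, x, pool):
    """Embed a message drawn uniformly at random from the pool for each image."""
    m = pool[secrets.randbelow(len(pool))]
    return encoder(x, m)

def detect_with_pool(detector, x, pool, rho):
    """Declare an image watermarked if the extracted message matches ANY pool entry (Eq. 5)."""
    m_prime = detector(x)
    return any(verify(m, m_prime, rho) for m in pool)
```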
252
+ As shown in Table 4, increasing $K$ causes the FPR to drop to $0\%$ at $K = 50$ and $K = 100$. We further strengthen our attack by collecting more watermarked images: specifically, we collect 5,000, 20,000, and 50,000 watermarked samples to evaluate the effect of data volume on this defense. As shown in Table 5, the FPR remains consistently at $0\%$ even as the size of $D_{\mathrm{aux}}$ increases. Therefore, embedding multiple messages proves to be a simple yet effective countermeasure against our attack.
253
+
254
+ Table 4: Performance comparison across different $K$ values.
255
+
256
+ <table><tr><td rowspan="2">Dataset</td><td colspan="3">K=10</td><td colspan="3">K=50</td><td colspan="3">K=100</td></tr><tr><td>PSNR↑</td><td>Forged Bit-acc.↑</td><td>FPR@10-6↑</td><td>PSNR↑</td><td>Forged Bit-acc.↑</td><td>FPR@10-6↑</td><td>PSNR↑</td><td>Forged Bit-acc.↑</td><td>FPR@10-6↑</td></tr><tr><td>MS-COCO</td><td>34.73</td><td>81.63%</td><td>34.00%</td><td>34.62</td><td>69.78%</td><td>0.00%</td><td>34.86</td><td>71.56%</td><td>0.00%</td></tr><tr><td>CelebAHQ</td><td>36.13</td><td>83.41%</td><td>44.00%</td><td>35.89</td><td>71.00%</td><td>0.00%</td><td>35.87</td><td>72.91%</td><td>0.00%</td></tr><tr><td>ImageNet</td><td>34.55</td><td>79.25%</td><td>25.00%</td><td>34.35</td><td>70.09%</td><td>0.00%</td><td>34.58</td><td>71.44%</td><td>0.00%</td></tr><tr><td>Diffusiondb</td><td>35.14</td><td>76.28%</td><td>17.00%</td><td>35.10</td><td>70.66%</td><td>0.00%</td><td>35.40</td><td>72.28%</td><td>0.00%</td></tr></table>
257
+
258
+ Table 5: Performance comparison across datasets with a larger size of $D_{aux}$ for $K = 100$ .
259
+
260
+ <table><tr><td rowspan="2">Dataset</td><td colspan="3">5000</td><td colspan="3">20000</td><td colspan="3">50000</td></tr><tr><td>PSNR↑</td><td>Forged Bit-acc.↑</td><td>FPR@10-6↑</td><td>PSNR↑</td><td>Forged Bit-acc.↑</td><td>FPR@10-6↑</td><td>PSNR↑</td><td>Forged Bit-acc.↑</td><td>FPR@10-6↑</td></tr><tr><td>MS-COCO</td><td>34.86</td><td>71.56%</td><td>0.00%</td><td>34.78</td><td>71.91%</td><td>0.00%</td><td>30.77</td><td>71.94%</td><td>0.00%</td></tr><tr><td>CelebA-HQ</td><td>35.87</td><td>72.91%</td><td>0.00%</td><td>34.15</td><td>72.97%</td><td>1.00%</td><td>27.99</td><td>72.72%</td><td>1.00%</td></tr><tr><td>ImageNet</td><td>34.58</td><td>71.44%</td><td>0.00%</td><td>34.57</td><td>72.56%</td><td>0.00%</td><td>30.47</td><td>72.19%</td><td>0.00%</td></tr><tr><td>DiffusionDB</td><td>35.40</td><td>72.28%</td><td>0.00%</td><td>34.99</td><td>72.34%</td><td>0.00%</td><td>31.15</td><td>72.06%</td><td>0.00%</td></tr></table>
261
+
262
+ # 8 Conclusion
263
+
264
+ We propose WMCopier, a diffusion model-based watermark forgery attack designed for the no-box setting, which leverages a diffusion model to estimate the target watermark distribution and performs shallow inversion to forge watermarks onto a given image. We also introduce a refinement procedure that improves both image quality and forgery effectiveness. Extensive experiments demonstrate that WMCopier achieves state-of-the-art performance against both open-source watermarking schemes and real-world deployed systems. Finally, we explore a potential defense, a multi-message strategy, which offers valuable insights for the future development of AIGC watermarking techniques.
265
+
266
+ # 9 Acknowledgments
267
+
268
+ We sincerely thank our anonymous reviewers for their valuable feedback and Amazon AGI's Responsible team for their prompt response. This paper is supported in part by the National Key Research and Development Program of China (2021YFB3100300, 2023YFB2904000, and 2023YFB2904001), the National Natural Science Foundation of China (62441238, 62072395, U20A20178, 62172359, and 62472372), the Zhejiang Provincial Natural Science Foundation of China under Grant LD24F020010, the Key Research and Development Program of Hangzhou City (2024SZD1A27), and the Key R&D Programme of Zhejiang Province (2025C02264).
269
+
270
+ # References
271
+
272
+ [1] Kayleen Devlin and Joshua Cheetham. Fake trump arrest photos: How to spot an ai-generated image. https://www.bbc.com/news/world-us-canada-65069316, 2023.
273
+ [2] Zhengyuan Jiang, Moyang Guo, Yuepeng Hu, and Neil Zhenqiang Gong. Watermark-based detection and attribution of ai-generated content. arXiv preprint arXiv:2404.04254, 2024.
274
+ [3] Diane Bartz and Krystal Hu. Openai, google, others pledge to watermark ai content for safety, white house says. https://www.reuters.com/technology/openai-google-others-pledge-watermark-ai-content-safety-white-house-2023-07-21/.
275
+ [4] Amazon. Watermark detection for amazon titan image generator now available in amazon bedrock. https://aws.amazon.com/cn/about-aws/whats-new/2024/04/watermark-detection-amazon-titan-image-generator-bedrock/, 2024.
276
+
277
+ [5] Google Deepmind. Synthid: Identifying ai-generated content with synthid. https://deepmind.google/technologies/synthid/, 2023.
278
+ [6] Emilia David. Openai is adding new watermarks to dall-e 3. https://www.theverge.com/2024/2/6/24063954.ai-watermarks-dalle3-openai-content-credentials, 2024.
279
+ [7] Yusuf Mehdi. Announcing microsoft copilot, your everyday ai companion. https://blogs.microsoft.com/blog/2023/09/21/announcing-microsoft-copilot-your-everyday-ai-companion/, 2023.
280
+ [8] Zhengyuan Jiang, Jinghuai Zhang, and Neil Zhenqiang Gong. Evading watermark based detection of ai-generated content. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, pages 1168-1181, 2023.
281
+ [9] Xuandong Zhao, Kexun Zhang, Zihao Su, Saastha Vasan, Ilya Grishchenko, Christopher Kruegel, Giovanni Vigna, Yu-Xiang Wang, and Lei Li. Invisible image watermarks are provably removable using generative ai. Advances in Neural Information Processing Systems, 37:8643-8672, 2024.
282
+ [10] Pei Yang, Hai Ci, Yiren Song, and Mike Zheng Shou. Can simple averaging defeat modern watermarks? Advances in Neural Information Processing Systems, 37:56644-56673, 2024.
283
+ [11] Xuandong Zhao, Sam Gunn, Miranda Christ, Jaiden Fairoze, Andres Fabrega, Nicholas Carlini, Sanjam Garg, Sanghyun Hong, Milad Nasr, Florian Tramer, et al. Sok: Watermarking for ai-generated content. arXiv preprint arXiv:2411.18479, 2024.
284
+ [12] Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, and Soheil Feizi. Can ai-generated text be reliably detected? arXiv preprint arXiv:2303.11156, 2023.
285
+ [13] Chenchen Gu, Xiang Lisa Li, Percy Liang, and Tatsunori Hashimoto. On the learnability of watermarks for language models. arXiv preprint arXiv:2312.04469, 2023.
286
+ [14] Andreas Müller, Denis Lukovnikov, Jonas Thietke, Asja Fischer, and Erwin Quiring. Black-box forgery attacks on semantic watermarks for diffusion models. arXiv preprint arXiv:2412.03283, 2024.
287
+ [15] Mehrdad Saberi, Vinu Sankar Sadasivan, Keivan Rezaei, Aounon Kumar, Atoosa Chegini, Wenxiao Wang, and Soheil Feizi. Robustness of ai-image detectors: Fundamental limits and practical attacks. arXiv preprint arXiv:2310.00076, 2023.
288
+ [16] Ruowei Wang, Chenguo Lin, Qijun Zhao, and Feiyu Zhu. Watermark faker: towards forgery of digital image watermarking. In 2021 IEEE International Conference on Multimedia and Expo (ICME), pages 1-6. IEEE, 2021.
289
+ [17] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.
290
+ [18] Nicolas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramer, Borja Balle, Daphne Ippolito, and Eric Wallace. Extracting training data from diffusion models. In 32nd USENIX Security Symposium (USENIX Security 23), pages 5253-5270, 2023.
291
+ [19] Ning Yu, Vladislav Skripniuk, Dingfan Chen, Larry S Davis, and Mario Fritz. Responsible disclosure of generative models using scalable fingerprinting. In International Conference on Learning Representations, 2021.
292
+ [20] Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Ngai-Man Cheung, and Min Lin. A recipe for watermarking diffusion models. arXiv preprint arXiv:2303.10137, 2023.
293
+ [21] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020.
294
+
295
+ [22] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6038–6047, 2023.
296
+ [23] Xuan Ju, Ailing Zeng, Yuxuan Bian, Shaoteng Liu, and Qiang Xu. Direct inversion: Boosting diffusion-based editing with 3 lines of code. arXiv preprint arXiv:2310.01506, 2023.
297
+ [24] Wenda Li, Huijie Zhang, and Qing Qu. Shallow diffuse: Robust and invisible watermarking through low-dimensional subspaces in diffusion models. arXiv preprint arXiv:2410.21088, 2024.
298
+ [25] Huayang Huang, Yu Wu, and Qian Wang. Robin: Robust and invisible watermarks for diffusion models with adversarial optimization. Advances in Neural Information Processing Systems, 37: 3937-3963, 2024.
299
+ [26] Pierre Fernandez, Guillaume Couairon, Hervé Jégou, Matthijs Douze, and Teddy Furon. The stable signature: Rooting watermarks in latent diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22466-22477, 2023.
300
+ [27] StabilityAI. Stable diffusion github repository. https://github.com/Stability-AI/stablediffusion.
301
+ [28] Nils Lukas and Florian Kerschbaum. PTW: Pivotal tuning watermarking for pre-trained image generators. In 32nd USENIX Security Symposium (USENIX Security 23), pages 2241-2258, 2023.
302
+ [29] Promptbase. https://promptbase.com/, 2024.
303
+ [30] Prompthero. https://prompthero.com/midjourney-prompts, 2024.
304
+ [31] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021.
305
+ [32] John J. Vastola. Generalization through variance: how noise shapes inductive biases in diffusion models. arXiv preprint arXiv:2504.12532, 2025.
306
+ [33] Daniel Garibi, Or Patashnik, Andrey Voynov, Hadar Averbuch-Elor, and Daniel Cohen-Or. Renoise: Real image inversion through iterative noising. In European Conference on Computer Vision, pages 395–413. Springer, 2024.
307
+ [34] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems, 32, 2019.
308
+ [35] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.
309
+ [36] Zijie J Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and Duen Horng Chau. Diffusiondb: A large-scale prompt gallery dataset for text-to-image generative models. arXiv preprint arXiv:2210.14896, 2022.
310
+ [37] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022.
311
+ [38] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014.
312
+ [39] Huaibo Huang, Ran He, Zhenan Sun, Tieniu Tan, et al. Introvae: Introspective variational autoencoders for photographic image synthesis. Advances in neural information processing systems, 31, 2018.
313
+
314
+ [40] Ali Al-Haj. Combined dwt-dct digital image watermarking. Journal of computer science, 3(9): 740-746, 2007.
315
+ [41] Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei. Hidden: Hiding data with deep networks. In Proceedings of the European conference on computer vision (ECCV), pages 657–672, 2018.
316
+ [42] Kevin Alex Zhang, Lei Xu, Alfredo Cuesta-Infante, and Kalyan Veeramachaneni. Robust invisible video watermarking with attention. arXiv preprint arXiv:1909.01285, 2019.
317
+ [43] Deepshikha Chopra, Preeti Gupta, Gaur Sanjay, and Anil Gupta. Lsb based digital image watermarking for gray scale image. IOSR Journal of Computer Engineering, 6(1):36-41, 2012.
318
+ [44] K. A. Navas, Mathews Cheriyan Ajay, M. Lekshmi, Tampy S. Archana, and M. Sasikumar. DWT-DCT-SVD based watermarking. In 2008 3rd International Conference on Communication Systems Software and Middleware and Workshops (COMSWARE '08), pages 271-274. IEEE, January 2008.
319
+ [45] Matthew Tancik, Ben Mildenhall, and Ren Ng. Stegastamp: Invisible hyperlinks in physical photographs. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2117-2126, 2020.
320
+ [46] Han Fang, Zhaoyang Jia, Zehua Ma, Ee-Chien Chang, and Weiming Zhang. PIMoG: An Effective Screen-shooting Noise-Layer Simulation for Deep-Learning-Based Watermarking Network. In Proceedings of the 30th ACM International Conference on Multimedia, pages 2267-2275, Lisboa Portugal, October 2022. ACM.
321
+ [47] Zhaoyang Jia, Han Fang, and Weiming Zhang. Mbrs: Enhancing robustness of dnn-based watermarking by mini-batch of real and simulated jpeg compression. In Proceedings of the 29th ACM international conference on multimedia, pages 41-49, 2021.
322
+ [48] Ning Yu, Vladislav Skripniuk, Sahar Abdelnabi, and Mario Fritz. Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 14428-14437, Montreal, QC, Canada, October 2021. IEEE. ISBN 978-1-66542-812-5. doi: 10.1109/ICCV48922.2021.01418.
323
+ [49] Yuxin Wen, John Kirchenbauer, Jonas Geiping, and Tom Goldstein. Tree-ring watermarks: Fingerprints for diffusion images that are invisible and robust. arXiv preprint arXiv:2305.20030, 2023.
324
+ [50] Zijin Yang, Kai Zeng, Kejiang Chen, Han Fang, Weiming Zhang, and Nenghai Yu. Gaussian shading: Provable performance-lossless image watermarking for diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
325
+ [51] Martin Kutter, Sviatoslav V Voloshynovskiy, and Alexander Herrigel. Watermark copy attack. In Security and Watermarking of Multimedia Contents II, volume 3971, pages 371-380. SPIE, 2000.
326
+ [52] Vitaliy Kinakh, Brian Pulfer, Yury Belousov, Pierre Fernandez, Teddy Furon, and Slava Voloshynovskiy. Evaluation of security of ml-based watermarking: Copy and removal attacks. arXiv preprint arXiv:2409.18211, 2024.
327
+ [53] Guanlin Li, Yifei Chen, Jie Zhang, Jiwei Li, Shangwei Guo, and Tianwei Zhang. Warfare: Breaking the watermark protection of ai-generated content. arXiv e-prints, pages arXiv-2310, 2023.
328
+ [54] Google DeepMind. Imagen 2. https://deepmind.google/technologies/imagen-2/.
329
+ [55] Amazon. Amazon Titan foundation models - generative AI. https://aws.amazon.com/cn/bedrock/amazon-models/titan/.
330
+ [56] Amazon. Amazon Titan Image Generator and watermark detection API are now available in Amazon Bedrock. https://aws.amazon.com/cn/blogs/aws/amazon-titan-image-generator-and-watermark-detection-api-are-now-available-in-amazon-bedrock/.
331
+
332
+ # NeurIPS Paper Checklist
333
+
334
+ # 1. Claims
335
+
336
+ Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
337
+
338
+ Answer: [Yes]
339
+
340
+ Justification: The main claims made in the abstract and introduction accurately reflect the paper's contributions and scope.
341
+
342
+ Guidelines:
343
+
344
+ - The answer NA means that the abstract and introduction do not include the claims made in the paper.
345
+ - The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
346
+ - The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
347
+ - It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
348
+
349
+ # 2. Limitations
350
+
351
+ Question: Does the paper discuss the limitations of the work performed by the authors?
352
+
353
+ Answer: [Yes]
354
+
355
+ Justification: Please see Section G.
356
+
357
+ Guidelines:
358
+
359
+ - The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
360
+ - The authors are encouraged to create a separate "Limitations" section in their paper.
361
+ - The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
362
+ - The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
363
+ - The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
364
+ - The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
365
+ - If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
366
+ - While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
367
+
368
+ # 3. Theory assumptions and proofs
369
+
370
+ Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
371
+
372
+ Answer: [NA]
373
+
374
+ Justification: The paper does not include theoretical results.
375
+
376
+ # Guidelines:
377
+
378
+ - The answer NA means that the paper does not include theoretical results.
379
+ - All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
380
+ - All assumptions should be clearly stated or referenced in the statement of any theorems.
381
+ - The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
382
+ - Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
383
+ - Theorems and Lemmas that the proof relies upon should be properly referenced.
384
+
385
+ # 4. Experimental result reproducibility
386
+
387
+ Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
388
+
389
+ Answer: [Yes]
390
+
391
+ Justification: The code will be available at the URL mentioned in the abstract.
392
+
393
+ # Guidelines:
394
+
395
+ - The answer NA means that the paper does not include experiments.
396
+ - If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
397
+ - If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
398
+ - Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
399
+ - While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
400
+ (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
401
+ (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
402
+ (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
403
+ (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
404
+
405
+ # 5. Open access to data and code
406
+
407
+ Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
408
+
409
+ # Answer: [Yes]
410
+
411
+ Justification: The code will be available at the URL mentioned in the abstract. We use an open-source diffusion model and data, which are cited correctly in the main paper and the appendix.
412
+
413
+ # Guidelines:
414
+
415
+ - The answer NA means that paper does not include experiments requiring code.
416
+ - Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
417
+ - While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
418
+ - The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
419
+ - The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
420
+ - The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
421
+ - At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
422
+ - Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
423
+
424
+ # 6. Experimental setting/details
425
+
426
+ Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
427
+
428
+ # Answer: [Yes]
429
+
430
+ Justification: Please see Appendix F and Section 5.3.
431
+
432
+ # Guidelines:
433
+
434
+ - The answer NA means that the paper does not include experiments.
435
+ - The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
436
+ - The full details can be provided either with the code, in appendix, or as supplemental material.
437
+
438
+ # 7. Experiment statistical significance
439
+
440
+ Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
441
+
442
+ # Answer: [No]
443
+
444
+ Justification: Error bars are not reported because computing them across all settings would be too computationally expensive.
445
+
446
+ # Guidelines:
447
+
448
+ - The answer NA means that the paper does not include experiments.
449
+ - The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
450
+ - The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
451
+ - The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
452
+ - The assumptions made should be given (e.g., Normally distributed errors).
453
+
454
+ - It should be clear whether the error bar is the standard deviation or the standard error of the mean.
455
+ - It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
456
+ - For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
457
+ - If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
458
+
459
+ # 8. Experiments compute resources
460
+
461
+ Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
462
+
463
+ Answer: [Yes]
464
+
465
+ Justification: Please see Appendix F.
466
+
467
+ Guidelines:
468
+
469
+ - The answer NA means that the paper does not include experiments.
470
+ - The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
471
+ - The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
472
+ - The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
473
+
474
+ # 9. Code of ethics
475
+
476
+ Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
477
+
478
+ Answer: [Yes]
479
+
480
+ Justification: The research conducted in the paper conforms with the NeurIPS Code of Ethics.
481
+
482
+ Guidelines:
483
+
484
+ - The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
485
+ - If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
486
+ - The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
487
+
488
+ # 10. Broader impacts
489
+
490
+ Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
491
+
492
+ Answer: [Yes]
493
+
494
+ Justification: Please see Appendix H.
495
+
496
+ Guidelines:
497
+
498
+ - The answer NA means that there is no societal impact of the work performed.
499
+ - If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
500
+ - Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
501
+
502
+ - The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
503
+ - The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
504
+ - If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
505
+
506
+ # 11. Safeguards
507
+
508
+ Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
509
+
510
+ Answer: [Yes]
511
+
512
+ Justification: Please see Section 7.
513
+
514
+ Guidelines:
515
+
516
+ - The answer NA means that the paper poses no such risks.
517
+ - Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
518
+ - Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
519
+ - We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
520
+
521
+ # 12. Licenses for existing assets
522
+
523
+ Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
524
+
525
+ Answer: [Yes]
526
+
527
+ Justification: All datasets, code, and models we used are public and properly cited.
528
+
529
+ Guidelines:
530
+
531
+ - The answer NA means that the paper does not use existing assets.
532
+ - The authors should cite the original paper that produced the code package or dataset.
533
+ - The authors should state which version of the asset is used and, if possible, include a URL.
534
+ - The name of the license (e.g., CC-BY 4.0) should be included for each asset.
535
+ - For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
536
+ - If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
537
+ - For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
538
+
539
+ - If this information is not available online, the authors are encouraged to reach out to the asset's creators.
540
+
541
+ # 13. New assets
542
+
543
+ Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
544
+
545
+ Answer: [Yes]
546
+
547
+ Justification: The code and data will be available at the URL mentioned in the abstract.
548
+
549
+ Guidelines:
550
+
551
+ - The answer NA means that the paper does not release new assets.
552
+ - Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
553
+ - The paper should discuss whether and how consent was obtained from people whose asset is used.
554
+ - At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
555
+
556
+ # 14. Crowdsourcing and research with human subjects
557
+
558
+ Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
559
+
560
+ Answer: [NA]
561
+
562
+ Justification: The paper does not involve crowdsourcing nor research with human subjects.
563
+
564
+ Guidelines:
565
+
566
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
567
+ - Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
568
+ - According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
569
+
570
+ # 15. Institutional review board (IRB) approvals or equivalent for research with human subjects
571
+
572
+ Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
573
+
574
+ Answer: [NA]
575
+
576
+ Justification: The paper does not involve crowdsourcing nor research with human subjects.
577
+
578
+ Guidelines:
579
+
580
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
581
+ - Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
582
+ - We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
583
+ - For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
584
+
585
+ # 16. Declaration of LLM usage
586
+
587
+ Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
588
+
589
+ # Answer: [NA]
590
+
591
+ Justification: LLMs were used only for writing.
592
+
593
+ # Guidelines:
594
+
595
+ - The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
596
+ - Please refer to our LLM policy ( https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
597
+
598
+ # A Algorithm
599
+
600
+ # Algorithm 1 WMCopier
601
+
602
+ Require: Clean image $x$; Noise predictor $\epsilon_{\theta}$ of pretrained diffusion model $\mathcal{M}_{\theta}$; Inversion steps $T_{S}$; Refinement iterations $L$; Low-noise step $t_l$ for refinement; Step size $\eta$; Trade-off coefficient $\lambda$.
603
+
604
+ Ensure: Forged watermarked image $x^{f}$
+ $x_{T_S} \gets \text{Inversion}(x, T_S)$ # Obtain noisy latent at step $T_S$ via DDIM inversion
+ $x_{T_S}' \gets x_{T_S}$ # Initialize the start point of sampling
605
+ for $t = T_S, T_S - 1, \ldots, 1$ do # DDIM sampling
606
+ $\epsilon_t \gets \epsilon_\theta(x'_t, t)$
+ $x_{t-1}' \gets \sqrt{\alpha_{t-1}} \cdot \left( \frac{x'_t - \sqrt{1 - \alpha_t} \cdot \epsilon_t}{\sqrt{\alpha_t}} \right) + \sqrt{1 - \alpha_{t-1}} \cdot \epsilon_t$
607
+ end for
608
+ $x^f \gets x_0'$
609
+ for $i = 1$ to $L$ do # Refinement
610
+ Sample $z \sim \mathcal{N}(0, \mathbf{I})$
+ $x_{t_l}^{f(i)} \gets \sqrt{\alpha_{t_l}} \cdot x^{f(i)} + \sqrt{1 - \alpha_{t_l}} \cdot z$ # Add noise to a low-noise step $t_l$
+ $x^{f(i+1)} \gets x^{f(i)} + \eta \cdot \nabla_{x^{f(i)}}\left(-\frac{1}{\sqrt{1 - \alpha_{t_l}}} \cdot \epsilon_\theta(x_{t_l}^{f(i)}, t_l)\right) - \lambda \cdot \|x^{f(i)} - x\|^2$
611
+ end for
612
+ return $x^{f} \gets x^{f(L)}$
613
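+ For readers who prefer code, a minimal PyTorch-style sketch of Algorithm 1 follows. It assumes the pretrained noise predictor is available as a callable `eps_theta(x, t)` together with its cumulative schedule `alpha_bar`; the `ddim_inversion` helper, the hyperparameter defaults, and the reading of the refinement update as a score-ascent step plus the gradient of the fidelity penalty $\lambda\|x^{f}-x\|^{2}$ are illustrative assumptions, not the released implementation.
+
+ ```python
+ import torch
+
+ @torch.no_grad()
+ def wmcopier_sketch(x, eps_theta, alpha_bar, ddim_inversion,
+                     T_s=200, L=30, t_l=50, eta=0.1, lam=0.01):
+     """Illustrative sketch of Algorithm 1; all hyperparameter values are placeholders."""
+     # DDIM inversion (assumed helper): noisy latent of the clean image at step T_s.
+     x_t = ddim_inversion(x, eps_theta, alpha_bar, T_s)
+
+     # Deterministic DDIM sampling back to t = 0 with the watermark-trained model.
+     for t in range(T_s, 0, -1):
+         eps = eps_theta(x_t, t)
+         x0_pred = (x_t - (1 - alpha_bar[t]).sqrt() * eps) / alpha_bar[t].sqrt()
+         x_t = alpha_bar[t - 1].sqrt() * x0_pred + (1 - alpha_bar[t - 1]).sqrt() * eps
+     x_f = x_t
+
+     # Refinement: score ascent at a low-noise step plus a fidelity pull toward x.
+     for _ in range(L):
+         z = torch.randn_like(x_f)
+         x_noisy = alpha_bar[t_l].sqrt() * x_f + (1 - alpha_bar[t_l]).sqrt() * z
+         score = -eps_theta(x_noisy, t_l) / (1 - alpha_bar[t_l]).sqrt()
+         x_f = x_f + eta * score - lam * 2.0 * (x_f - x)  # gradient of lam * ||x_f - x||^2
+     return x_f
+ ```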
+
614
+ # B Real-World Deployment
615
+
616
+ In line with commitments made to the White House, leading U.S. AI companies that provide generative AI services are implementing watermarking systems to embed watermark information into model-generated content before it is delivered to users [3].
617
+
618
+ Google introduced SynthID [5], which adds invisible watermarks to both Imagen 3 and Imagen 2 [54]. Amazon has deployed invisible watermarks on its Titan image generator [4].
619
+
620
+ Meanwhile, OpenAI and Microsoft are transitioning from metadata-based watermarking to invisible methods. OpenAI points out that invisible watermarking techniques are superior to the visible watermarks and metadata-based methods previously used in DALL-E 2 and DALL-E 3 [6], due to their imperceptibility and robustness to common image manipulations such as screenshots, compression, and cropping. Microsoft has announced plans to incorporate invisible watermarks into AI-generated images in Bing [7]. Table 6 summarizes watermarking systems deployed in text-to-image models.
621
+
622
+ Table 6: Watermarking deployment across major Gen-AI service providers.
623
+
624
+ <table><tr><td>Service Provider</td><td>Watermark</td><td>Generative Model</td><td>Deployed</td><td>Detector</td></tr><tr><td>OpenAI</td><td>Invisible</td><td>DALL·E 2 &amp; DALL·E 3</td><td>In Progress</td><td>Unknown</td></tr><tr><td>Google (SynthID)</td><td>Invisible</td><td>Imagen 2 &amp;Imagen 3</td><td>Deployed</td><td>Not Public</td></tr><tr><td>Microsoft</td><td>Invisible</td><td>DALL·E 3 (Bing)</td><td>In Progress</td><td>Unknown</td></tr><tr><td>Amazon</td><td>Invisible</td><td>Titan</td><td>Deployed</td><td>Public</td></tr></table>
625
+
626
+ # C Watermark Schemes
627
+
628
+ # C.1 Open-source Watermarking Schemes
629
+
630
+ DWT-DCT. DWT-DCT [40] is a classical watermarking technique that embeds watermark bits into the frequency domain of the image. It first applies the discrete wavelet transform (DWT) to decompose the image into sub-bands and then performs the discrete cosine transform (DCT) on selected sub-bands.
631
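+ As a rough illustration of this frequency-domain embedding idea (a toy variant, not the exact scheme of [40]), the sketch below takes one DWT level with PyWavelets, applies an 8×8 block DCT to the LL sub-band with SciPy, and writes one watermark bit per block into a mid-frequency coefficient; the block size, coefficient position, and embedding strength are assumptions.
+
+ ```python
+ import numpy as np
+ import pywt
+ from scipy.fftpack import dct, idct
+
+ def embed_dwt_dct(img_gray, bits, strength=8.0):
+     """Toy DWT-DCT embedding: one bit per 8x8 block of the LL sub-band."""
+     LL, (LH, HL, HH) = pywt.dwt2(img_gray.astype(np.float64), "haar")
+     h, w = LL.shape
+     k = 0
+     for i in range(0, h - 7, 8):
+         for j in range(0, w - 7, 8):
+             if k >= len(bits):
+                 break
+             block = dct(dct(LL[i:i+8, j:j+8].T, norm="ortho").T, norm="ortho")
+             block[4, 3] = strength if bits[k] else -strength  # force a mid-frequency coefficient's sign
+             LL[i:i+8, j:j+8] = idct(idct(block.T, norm="ortho").T, norm="ortho")
+             k += 1
+     return pywt.idwt2((LL, (LH, HL, HH)), "haar")
+ ```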
+
632
+ HiDDeN. HiDDeN [41] is a neural network-based watermarking framework using an encoder-decoder architecture. A watermark message is embedded into an image via a convolutional encoder, and a decoder is trained to recover the message. Additionally, a noise simulation layer is inserted between the encoder and decoder to encourage robustness.
635
+
636
+ RivaGAN. RivaGAN embeds watermark messages into video or image frames using a GAN-based architecture. A generator network embeds the watermark into the input image, while a discriminator ensures visual quality.
637
+
638
+ Stable Signature. As an in-processing watermarking technique, Stable Signature [26] couples the watermark message with the parameters of the stable diffusion model. It is an invisible watermarking method proposed by Meta AI, which embeds a unique binary signature into images generated by latent diffusion models (LDMs) through fine-tuning the model's decoder.
639
+
640
+ Setup. In our experiments, all schemes are evaluated under their default configurations, including the default image resolutions (128×128 for HiDDeN, 256×256 for RivaGAN, and 512×512 for both Stable Signature and Amazon), as well as their default watermark lengths (32 bits for DWT-DCT and RivaGAN, 30 bits for HiDDeN, and 48 bits for Stable Signature). With regard to PSNR, we report both the original PSNR of these schemes and the PSNR of our forged samples in Table 7.
641
+
642
+ Table 7: PSNR of watermarking schemes and our forged samples
643
+
644
+ <table><tr><td>Scheme</td><td>DWT-DCT</td><td>HiddeN</td><td>RivaGAN</td><td>Stable Signature</td></tr><tr><td>PSNR (Original)</td><td>38.50</td><td>31.88</td><td>38.61</td><td>31.83</td></tr><tr><td>PSNR (Ours)</td><td>33.69</td><td>31.74</td><td>34.07</td><td>31.29</td></tr></table>
645
+
646
+ # C.2 Closed-Source Watermarking System
647
+
648
+ Among the available options, Google does not open its watermark detection mechanisms to users, making it impossible to evaluate the success of our attack. In contrast, Amazon provides access to its watermark detection for the Titan model [55], allowing us to directly measure the performance of our attack. Therefore, we chose Amazon's watermarking scheme for our experiments. Amazon's watermarking scheme, referred to as Amazon WM, ensures that AI-generated content can be traced back to its source. The watermark detection API detects whether an image was generated by the Titan model and provides a confidence level for the detection<sup>3</sup>. This confidence level reflects the likelihood that the image contains a valid watermark, as illustrated in Figure 6.
649
+
650
+ In our experiments, we generated 5,000 images from the Titan model using Amazon Bedrock [56]. Specifically, we used ten different prompts to generate images with the Titan model, which were then employed to carry out our attack. Examples of the prompts we used are listed in Figure 7. In this attack, we embedded Amazon's watermark onto four datasets, each containing 100 images. Finally, we submitted the forged images to Amazon's watermark detection API. Additionally, we forged Amazon's watermark on images from non-public datasets, including human-captured photos and web-sourced images, all of which were flagged as Titan-generated.
651
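+ For context, the snippet below sketches how such watermarked Titan images can be requested programmatically from Amazon Bedrock with boto3's `invoke_model` call; the model ID and the request/response field names follow our understanding of the Titan image interface and should be treated as assumptions rather than details taken from the paper.
+
+ ```python
+ import base64
+ import json
+
+ import boto3
+
+ bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
+
+ def generate_titan_image(prompt, model_id="amazon.titan-image-generator-v1"):
+     """Request one Titan image; Bedrock embeds its invisible watermark server-side."""
+     body = {
+         "taskType": "TEXT_IMAGE",                      # assumed request schema
+         "textToImageParams": {"text": prompt},
+         "imageGenerationConfig": {"numberOfImages": 1, "height": 512, "width": 512},
+     }
+     resp = bedrock.invoke_model(modelId=model_id, body=json.dumps(body))
+     payload = json.loads(resp["body"].read())
+     return base64.b64decode(payload["images"][0])      # assumed base64-PNG response field
+ ```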
+
652
+ ![](images/6d51f14d9e29f8ddcfe2bc2ef62a8623889fedc51abc6594024acd29414aeaea.jpg)
657
+ Figure 6: Result from Amazon's watermark detection API on a forged image, reporting "Watermark detected (Confidence: High)".
661
+
662
+ 1. A serene landscape of a misty forest at sunrise, with golden light filtering through the trees and a calm river flowing in the foreground, ultra-realistic and soft lighting.
663
+ 2. A futuristic cityscape at night, with glowing neon lights reflecting on wet streets, flying cars and towering skyscrapers, cyberpunk style, highly detailed.
664
+ 3. A majestic lion standing proudly on a cliff at sunset, with a dramatic orange sky and rolling hills in the background, hyper-realistic, high detail fur texture.
665
+ 4. An abstract painting of swirling vibrant colors, reminiscent of Van Gogh's 'Starry Night', using bold brushstrokes and a mix of blue, yellow, and white.
666
+ 5. A beautiful, tranquil Japanese garden with a koi pond, cherry blossom trees in full bloom, and a traditional tea house, soft sunlight filtering through the branches.
667
+ 6. A fantasy scene of a dragon flying over a medieval castle, with smoke rising from its nostrils and a stormy sky in the background, highly detailed, dark fantasy style.
668
+ 7. A close-up of a dew-covered spiderweb in the morning, with sunlight sparkling on the droplets, extremely detailed, sharp focus on the texture and reflection.
669
+ 8. A peaceful 1920s Parisian street view, featuring cozy outdoor cafes, charming cobblestone pathways, and vintage buildings with intricate architecture.
670
+ 9. An astronaut standing on the surface of Mars, gazing at the Earth in the distance, with red rocky terrain and a clear blue sky, photorealistic, high contrast.
671
+ 10.A magical winter wonderland with snow-covered trees, a frozen lake reflecting the pale blue sky, and soft sunlight peeking through the branches, ultra-realistic and serene.
672
+
673
+
674
+
675
+ Figure 7: Example prompts used for image generation with the Titan model.
676
+
677
+ # D External Experiment Results
678
+
679
+ # D.1 Further Analysis of DWT-DCT Attack Results
680
+
681
+ We observed that DWT-DCT suffers from low bit-accuracy on certain images, which leads to unreliable watermark detection and verification. To reflect a more practical scenario, we assume that the service provider only returns images with high bit accuracy to users to ensure traceability. Specifically, we select 5,000 images with $100\%$ bit accuracy to construct our auxiliary dataset $\mathcal{D}_{aux}$. We then apply both the original DWT-DCT scheme and our attack to add watermarks to clean images from four datasets. As shown in Table 8, our method achieves even higher bit-accuracy than the original watermarking process itself.
682
+
683
+ Table 8: Comparison of bit accuracy between original DWT-DCT and DWT-DCT (Ours).
684
+
685
+ <table><tr><td rowspan="2">Dataset</td><td colspan="2">DWTDCT-Original</td><td colspan="2">DWTDCT-WMCopier</td></tr><tr><td>Bit-acc.↑</td><td>FPR@10-6↑</td><td>Bit-acc.↑</td><td>FPR@10-6↑</td></tr><tr><td>MS-COCO</td><td>82.15%</td><td>56.60%</td><td>89.19%</td><td>60.20%</td></tr><tr><td>CelebA-HQ</td><td>84.70%</td><td>54.70%</td><td>89.46%</td><td>53.20%</td></tr><tr><td>ImageNet</td><td>85.37%</td><td>55.30%</td><td>88.25%</td><td>55.80%</td></tr><tr><td>DiffusionDB</td><td>82.42%</td><td>52.90%</td><td>85.17%</td><td>54.30%</td></tr></table>
686
+
687
+ # D.2 Semantic Watermark
688
+
689
+ Semantic watermarking [49, 50] embeds watermark information that is intrinsically tied to the semantic content of the image. To further investigate the effectiveness of our attack on semantic watermarking, we compare it with the forgery attack proposed by Müller et al. [14], which is specifically designed for semantic watermark schemes. We adopt Treering [49] as the target watermark. As shown in Table 9, both our method and Müller's achieve a $100\%$ false positive rate (FPR) under Treering's default threshold of 0.01. However, our method produces significantly higher forgery quality, with an average PSNR over 30 dB, compared to around 26 dB for Müller's.
690
+
691
+ We also evaluate Müller's method on a non-semantic watermark, Stable Signature. As summarized in Table 10, Müller's approach fails to attack this type of watermark, while our method maintains a high success rate.
692
+
693
+ Table 9: Comparison with Müller et al. [14] and our attack on Treering.
694
+
695
+ <table><tr><td rowspan="2">Dataset</td><td colspan="2">Müller et al. [14]</td><td colspan="2">Ours</td></tr><tr><td>PSNR↑</td><td>FPR@0.01↑</td><td>PSNR↑</td><td>FPR@0.01↑</td></tr><tr><td>MS-COCO</td><td>26.14</td><td>100.00%</td><td>32.72</td><td>100.00%</td></tr><tr><td>CelebA-HQ</td><td>25.22</td><td>100.00%</td><td>31.52</td><td>100.00%</td></tr><tr><td>ImageNet</td><td>26.82</td><td>100.00%</td><td>32.99</td><td>100.00%</td></tr><tr><td>DiffusionDB</td><td>25.19</td><td>100.00%</td><td>32.78</td><td>100.00%</td></tr></table>
696
+
697
+ Table 10: Comparison with Müller et al. [14] and our attack on Stable Signature.
698
+
699
+ <table><tr><td rowspan="2">Dataset</td><td colspan="3">Müller et al. [14]</td><td colspan="3">Ours</td></tr><tr><td>PSNR↑</td><td>Forged Bit-Acc.↑</td><td>FPR@10-6↑</td><td>PSNR↑</td><td>Forged Bit-Acc.↑</td><td>FPR@10-6↑</td></tr><tr><td>MS-COCO</td><td>25.66</td><td>45.70%</td><td>0.00%</td><td>31.29</td><td>98.04%</td><td>94.60%</td></tr><tr><td>CelebA-HQ</td><td>24.73</td><td>51.23%</td><td>0.00%</td><td>30.54</td><td>96.04%</td><td>100.00%</td></tr><tr><td>ImageNet</td><td>25.91</td><td>47.71%</td><td>0.00%</td><td>31.33</td><td>97.03%</td><td>98.60%</td></tr><tr><td>DiffusionDB</td><td>26.12</td><td>48.45%</td><td>0.00%</td><td>31.59</td><td>96.24%</td><td>96.60%</td></tr></table>
700
+
701
+ # D.3 Discrimination of Forged Watermarks by Robustness Gap
702
+
703
+ While the robustness gap between genuine and forged watermarks offers a promising direction for detecting forged samples, we find it is insufficient for reliable discrimination. This limitation becomes particularly evident when genuine samples have already been subjected to mild distortions.
704
+
705
+ In this discrimination procedure, samples are classified as forgeries if their bit accuracy falls below a predefined threshold $\kappa$ after a single perturbation is applied. Specifically, we apply perturbation $A$ to both genuine and forged watermarked images and then distinguish them by their bit accuracy. However, because the robustness of the watermarking scheme itself is finite, once genuine watermarked images have already undergone a slight perturbation $B$, the bit-accuracy values of genuine and forged samples become indistinguishable. For distortion $A$ we use Gaussian noise with $\sigma = 0.05$, while for distortion $B$ Gaussian noise with $\sigma = 0.02$ is applied. The ROC curve and the bit-accuracy distribution for this case are shown in Figure 8.
706
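+ A minimal sketch of this single-perturbation discrimination protocol is given below; `decode_bits` is a hypothetical stand-in for the watermark decoder of the scheme under test, and images are assumed to be float arrays in $[0, 1]$.
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import roc_curve
+
+ def bit_accuracy(img, key_bits, decode_bits):
+     return float(np.mean(decode_bits(img) == key_bits))
+
+ def discriminate(genuine_imgs, forged_imgs, key_bits, decode_bits,
+                  sigma_a=0.05, kappa=0.9, seed=0):
+     """Flag an image as forged if its bit accuracy under Gaussian noise A drops below kappa."""
+     rng = np.random.default_rng(seed)
+     scores, labels = [], []
+     for label, imgs in ((0, genuine_imgs), (1, forged_imgs)):
+         for img in imgs:
+             noisy = np.clip(img + rng.normal(0.0, sigma_a, img.shape), 0.0, 1.0)
+             scores.append(bit_accuracy(noisy, key_bits, decode_bits))
+             labels.append(label)
+     preds = [int(s < kappa) for s in scores]               # below threshold -> forged
+     fpr, tpr, _ = roc_curve(labels, [-s for s in scores])  # lower accuracy = stronger "forged" score
+     return preds, fpr, tpr
+ ```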
+
707
+ ![](images/ff03eaf090af03015f5ddcf35061d6b8bd9356bf910cb7535d52eeb0cb1e969a.jpg)
708
+ Figure 8: ROC curve and bit accuracy distribution (KDE) for genuine and forged watermark samples under Gaussian noise.
709
+
710
+ ![](images/d14a234e7cf9b92d67d49cd2ea8086794c673d55c39ce1d58c7729bc19233049.jpg)
711
+
712
+ # E Additional Ablation Studies
713
+
714
+ Table 11 shows that the proposed refinement step substantially improves visual fidelity, as measured by PSNR, while simultaneously enhancing forgery performance (forged-bit accuracy).
715
+
716
+ We also explore the impact of varying the size of $D_{\mathrm{aux}}$ . Specifically, we use 1,000, 5,000, and 10,000 collected RivaGAN watermarked images. As shown in Table 12, larger $D_{\mathrm{aux}}$ generally yields higher forged-bit accuracy and higher FPR across datasets. However, the improvement becomes marginal once the size of $D_{\mathrm{aux}}$ reaches around 5,000, indicating that the attack performance saturates beyond this point.
717
+
718
+ Table 11: Impact of refinement on forgery performance.
719
+
720
+ <table><tr><td rowspan="2">Watermark Scheme</td><td colspan="2">PSNR ↑</td><td colspan="2">Forged Bit-acc. ↑</td><td colspan="2">FPR@10-6↑</td></tr><tr><td>W/o Ref.</td><td>W/ Ref.</td><td>W/o Ref.</td><td>W/ Ref.</td><td>W/o Ref.</td><td>W/ Ref.</td></tr><tr><td>DWT-DCT</td><td>32.40</td><td>33.77</td><td>63.03%</td><td>89.62%</td><td>16.00%</td><td>57.00%</td></tr><tr><td>HiddeN</td><td>29.81</td><td>32.79</td><td>80.60%</td><td>99.40%</td><td>89.00%</td><td>94.00%</td></tr><tr><td>RivaGAN</td><td>31.89</td><td>34.03</td><td>89.90%</td><td>95.90%</td><td>84.00%</td><td>96.00%</td></tr><tr><td>StableSignature</td><td>25.60</td><td>31.27</td><td>97.58%</td><td>98.19%</td><td>91.00%</td><td>98.00%</td></tr></table>
721
+
722
+ Table 12: Performance comparison across datasets with different sizes of ${D}_{aux}$
723
+
724
+ <table><tr><td rowspan="2">Dataset</td><td colspan="3">1000</td><td colspan="3">5000</td><td colspan="3">10000</td></tr><tr><td>PSNR↑</td><td>Forged Bit-acc.↑</td><td>FPR@10-6↑</td><td>PSNR↑</td><td>Forged Bit-acc.↑</td><td>FPR@10-6↑</td><td>PSNR↑</td><td>Forged Bit-acc.↑</td><td>FPR@10-6↑</td></tr><tr><td>MS-COCO</td><td>34.16</td><td>81.82%</td><td>80.70%</td><td>34.07</td><td>95.74%</td><td>96.40%</td><td>34.47</td><td>97.81%</td><td>96.30%</td></tr><tr><td>CelebA-HQ</td><td>35.74</td><td>89.10%</td><td>89.50%</td><td>35.28</td><td>98.61%</td><td>99.10%</td><td>35.25</td><td>98.63%</td><td>98.50%</td></tr><tr><td>ImageNet</td><td>34.10</td><td>81.25%</td><td>71.50%</td><td>33.87</td><td>93.83%</td><td>94.90%</td><td>34.29</td><td>93.53%</td><td>95.80%</td></tr><tr><td>DiffusionDB</td><td>34.77</td><td>74.76%</td><td>64.10%</td><td>34.50</td><td>90.43%</td><td>91.20%</td><td>34.96</td><td>91.70%</td><td>93.60%</td></tr></table>
725
+
726
+ # F Training Details of the Diffusion Model
727
+
728
+ We adopt a standard DDIM framework for training, following the official Hugging Face tutorial<sup>4</sup>. The model is trained for 20,000 iterations with a batch size of 256 and a learning rate of $1 \times 10^{-4}$ . The entire training process takes roughly 40 A100 GPU hours. To support different watermarking schemes, we only adjust the input resolution of the model to match the input dimensions for each watermark. Other training settings and model configurations remain unchanged. Although the current training setup suffices for watermark forgery, enhancing the model's ability to better capture the watermark signal is left for future work. For our primary experiments, we train an unconditional diffusion model from scratch using 5,000 watermarked images. Due to the limited amount of training data, the diffusion model demonstrates memorization [18], resulting in reduced sample diversity, as illustrated in Figure 9. All of the experiments are conducted on an NVIDIA A100 GPU.
729
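+ For reference, a condensed sketch of this training setup in the style of the Hugging Face diffusers tutorial is shown below; the UNet configuration, the dataloader, and the use of the standard $\epsilon$-prediction objective are illustrative assumptions rather than the exact released configuration.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from diffusers import DDPMScheduler, UNet2DModel
+
+ model = UNet2DModel(sample_size=256, in_channels=3, out_channels=3)  # resolution follows the watermark scheme
+ scheduler = DDPMScheduler(num_train_timesteps=1000)
+ optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
+
+ def train(dataloader, num_iters=20_000, device="cuda"):
+     """Epsilon-prediction training on the collected watermarked images."""
+     model.to(device)
+     step = 0
+     while step < num_iters:
+         for clean in dataloader:  # batches of watermarked images scaled to [-1, 1]
+             clean = clean.to(device)
+             noise = torch.randn_like(clean)
+             t = torch.randint(0, scheduler.config.num_train_timesteps,
+                               (clean.shape[0],), device=device)
+             noisy = scheduler.add_noise(clean, noise, t)
+             loss = F.mse_loss(model(noisy, t).sample, noise)
+             optimizer.zero_grad()
+             loss.backward()
+             optimizer.step()
+             step += 1
+             if step >= num_iters:
+                 break
+ ```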
+
730
+ # G Limitation
731
+
732
+ In this section, we discuss the limitations of our attack. While our current training paradigm already achieves effective watermark forgery, we have not yet systematically explored how to better guide diffusion models to capture the underlying watermark distribution. In this work, we employ a standard diffusion architecture without any specialized training strategies. We leave the exploration of alternative architectures and training schemes to future work. Moreover, understanding why different watermark types exhibit varying forgery and learning behaviors remains an open problem. Additionally, our method requires a substantial amount of watermarked data and incurs non-trivial training costs.
733
+
734
+ # H Broader Impact
735
+
736
+ Invisible watermarking plays a critical role in detecting AI-generated content and holding its sources accountable, making it a technology of significant societal importance. Our research introduces a novel watermark forgery attack, revealing the vulnerabilities of current watermarking schemes to such attacks. Although our work involves the watermarking system deployed by Amazon, as responsible researchers, we have worked closely with Amazon's Responsible AI team to develop a solution, which has now been deployed. The Amazon Responsible AI team has issued the following statement:
739
+
740
+ 'On March 28, 2025, we released an update that improves the watermark detection robustness of our image generation foundation models (Titan Image Generator and Amazon Nova Canvas). With this change, we have maintained our existing watermark detection accuracy. No customer action is required. We appreciate the researchers from the State Key Laboratory of Blockchain and Data Security at Zhejiang University for reporting this issue and collaborating with us.'
741
+
742
+ While our study highlights the potential risks of existing watermarking systems, we believe it plays a positive role in the early stages of their deployment. By providing valuable insights for improving current technologies, our work contributes to enhancing the security and robustness of watermarking systems, ultimately fostering more reliable solutions with a positive societal impact.
743
+
744
+ ![](images/3f781db502af3da8ab16fbf8e79c2c2932a585304c0fe074fbf955f15d713ad7.jpg)
745
+ Figure 9: Generated images from diffusion models trained on 5,000 watermarked images
746
+
747
+ # I Forged Samples
748
+
749
+ ![](images/65acd98ce4334b01fd9cb37de9adaf661e023da200ea6cc8134a98a1e6828c2d.jpg)
750
+ Figure 10: Examples of forged Amazon watermark samples on the DiffusionDB
751
+
752
+ ![](images/035d2312a74ee3ba42fa086bfe21662238d8641c081744493f31faa3159f4108.jpg)
753
+ Figure 11: Examples of forged Amazon watermark samples on the MS-COCO
754
+
755
+ ![](images/57c46d32626b7bc2d055f0dd5e3aa3659c64357a25353a6815383c2ba843742a.jpg)
756
+ Figure 12: Examples of forged Amazon watermark samples on the CelebA-HQ
757
+
758
+ ![](images/62aedef55880162b97cefaa6de660b96b188ecffd6c4d03f49fcfef22e5d93c2.jpg)
759
+ Figure 13: Examples of forged Amazon watermark samples on the ImageNet
NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:822d351a242d40c5d6b9f53587ae530afae6b757ae139924ff33de7f436955ff
3
+ size 2452650
NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:206e9597c1662e6a35799a333e8dfae71b5587f19acc765982660eeb6d282072
3
+ size 855106
NeurIPS/2025/WaLRUS_ Wavelets for Long range Representation Using State Space Methods/b6141afb-6183-4c95-9870-399c132ba26a_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6ffd41259034223e3d98b32f26b3144b603703dce1fd6940407a551159c75215
3
+ size 151381
NeurIPS/2025/WaLRUS_ Wavelets for Long range Representation Using State Space Methods/b6141afb-6183-4c95-9870-399c132ba26a_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4e38450a3b5ac2941882d23a43953784aa7f673f6562b6e89d9f0b25276752c4
3
+ size 191097
NeurIPS/2025/WaLRUS_ Wavelets for Long range Representation Using State Space Methods/b6141afb-6183-4c95-9870-399c132ba26a_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b9f32d2488df2161384557c577f5151b3b681995f6f41032e0970f3fd73c3648
3
+ size 21713358
NeurIPS/2025/WaLRUS_ Wavelets for Long range Representation Using State Space Methods/full.md ADDED
@@ -0,0 +1,800 @@
 
 
 
 
1
+ # WaLRUS: Wavelets for Long-range Representation Using SSMs
2
+
3
+ Hossein Babaei Mel White Sina Alemohammad Richard G. Baraniuk
4
+
5
+ Department of Electrical and Computer Engineering, Rice University {hb26,mel.white,sa86,richb}@rice.edu
6
+
7
+ # Abstract
8
+
9
+ State-Space Models (SSMs) have proven to be powerful tools for modeling long-range dependencies in sequential data. While the recent method known as HiPPO has demonstrated strong performance, and formed the basis for machine learning models S4 and Mamba, it remains limited by its reliance on closed-form solutions for a few specific, well-behaved bases. The SaFARi framework generalized this approach, enabling the construction of SSMs from arbitrary frames, including non-orthogonal and redundant ones, thus allowing an infinite diversity of possible "species" within the SSM family. In this paper, we introduce WaLRUS (Wavelets for Long-range Representation Using SSMs). We compare WaLRUS to HiPPO-based models, and demonstrate improved accuracy and more efficient implementations for online function approximation tasks.
10
+
11
+ # 1 Introduction
12
+
13
+ Sequential data is foundational to many machine learning tasks, including natural language processing, speech recognition, and video understanding [1-3]. These applications require models that can effectively process and retain information over long time horizons. A central challenge in this setting is the efficient representation of long-range dependencies in a way that preserves essential features of the input signal for downstream tasks, while remaining computationally tractable during both training and inference [4].
14
+
15
+ Recurrent neural networks (RNNs) are traditional choices for modeling sequential data, but struggle with long-term dependencies due to vanishing or exploding gradients during backpropagation through time [4-6]. While gated variants like LSTMs [7] and GRUs [8] mitigate some issues, they require significant tuning and lack compatibility with parallel processing, hindering scalability.
16
+
17
+ State-space models (SSMs) offer a linear and principled framework for encoding temporal information, and have re-emerged as a powerful alternative for online representation of sequential data [9-16]. By design, they enable the online computation of compressive representations that summarize the entire input history using a fixed-size state vector, ensuring a constant memory footprint regardless of sequence length. A major breakthrough came with HiPPO (High-order Polynomial Projection Operators), which reformulates online representation as a function approximation problem using orthogonal polynomial bases [9]. This approach underpins state-of-the-art models like S4 and Mamba, enabling compact representations for long-range dependencies [10, 11].
18
+
19
+ However, existing SSMs primarily rely on Legendre and Fourier bases, which, although effective for smooth or periodic signals, struggle with non-stationary and localized features [9, 10]. These challenges are especially evident in domains such as audio, geophysics, and biomedical signal processing, where rapid transitions and sparse structure are common.
20
+
21
+ To address this limitation, the SaFARi framework (State-Space Models for Frame-Agnostic Representation) extends HiPPO to arbitrary frames, including non-orthogonal and redundant bases [13, 14, 17].
22
+
23
+ ![](images/87c2926531ee1503b6170a1892a8472b140aec86b6ca2042ad38be14d5633714.jpg)
24
+ Figure 1: An input signal comprising three random spikes is sequentially processed by SSMs and reconstructed after observing the entire input. Only the wavelet-based SSM constructed using WaLRUS can clearly distinguish adjacent spikes.
25
+
26
+ This generalization enables SSM construction from any frame via numerical solutions of first-order linear differential equations, preserving HiPPO's memory efficiency and update capabilities without closed-form restrictions.
27
+
28
+ In this paper, we leverage the SaFARi method with wavelet frames to introduce a new model, WaLRUS (Wavelets for Long-range Representation Using SSMs). We derive our model using Daubechies wavelets with two variants: scaled-WaLRUS and translated-WaLRUS, designed for capturing non-smooth and localized features through compactly supported, multi-resolution wavelet decompositions [18]. These properties allow WaLRUS to retain fine-grained signal details typically lost by polynomial-based models.
29
+
30
+ We also provide a comparative analysis of WaLRUS and existing HiPPO variants (see Fig. 1). Empirical results demonstrate that the wavelet-based WaLRUS model consistently outperforms Legendre and Fourier-based HiPPO models in reconstruction accuracy, especially on signals with sharp transients. Furthermore, WaLRUS has been experimentally observed to be stably diagonalizable, which is the key enabler of efficient convolution-based implementations and parallel computation [13, 14].
31
+
32
+ These results highlight the practical advantages of WaLRUS models, particularly in scenarios where signal structure varies across time and scale. By bridging multiscale signal analysis and online function approximation, WaLRUS opens new directions for modeling complex temporal phenomena across disciplines.
33
+
34
+ # 2 Background
35
+
36
+ Recent advances in machine learning, computer vision, and large language models have pushed the frontier of learning from long sequences of data. These applications demand models that can (1) generate compact representations of input streams, (2) preserve long-range dependencies, and (3) support efficient online updates.
37
+
38
+ Classical linear methods, such as the Fourier transform, offer compact representations in the frequency domain [19-23]. However, they are ill-suited for online processing: each new input requires recomputing the entire representation, making them inefficient for streaming data and limited in their memory horizon. Nonlinear models like recurrent neural networks (RNNs) and their gated variants (LSTMs, GRUs) have been more successful in sequence modeling, but they face well-known issues such as vanishing/exploding gradients and limited parallelization [4-6, 8]. Moreover, their representations are task-specific, and not easily repurposed across different settings.
39
+
40
+ To resolve these issues, the HiPPO framework [9] casts online function approximation as a continuous projection of the input $u(t)$ onto a linear combination of the given basis functions $\mathcal{G}$ . At every time $T$ , it produces a compressed state vector $\vec{c}(T)$ that satisfies the update rule:
41
+
42
+ $$
43
+ \frac {d}{d T} \vec {c} (T) = - A _ {(T)} \vec {c} (T) + B _ {(T)} u (T). \tag {1}
44
+ $$
45
+
46
+ Here, $A_{(T)}$ and $B_{(T)}$ are derived based on the choice of polynomial basis and measure $\mu(t)$ , which defines how recent history is weighted. Two commonly used measures are:
47
+
48
+ $$
49
+ \mu_ {t r} (t) = \frac {1}{\theta} \mathbb {1} _ {t \in [ T - \theta , T ]}, \quad \mu_ {s c} (t) = \frac {1}{T} \mathbb {1} _ {t \in [ 0, T ]}. \tag {2}
50
+ $$
51
+
52
+ The translated measure $\mu_{tr}$ emphasizes recent history within a sliding window of length $\theta$ , while the scaled measure $\mu_{sc}$ compresses the entire input history into a fixed-length representation.
53
+
54
+ Despite its strengths, HiPPO is restricted to only a few bases (e.g., Legendre, Fourier), and deriving $A(t)$ and $B(t)$ in closed form is only tractable for specific basis-measure combinations.
55
+
56
+ SaFARi addressed this limitation by generalizing online function approximation to any arbitrary frame [17]. A frame $\Phi(t)$ is a set of elements $\{\phi_i(t)\}$ such that one can reconstruct any input $g(t)$ by knowing the inner products $\langle g(t), \phi_i(t) \rangle$ . For a given frame $\Phi$ , its complex conjugate $\overline{\Phi}$ , and its dual $\widetilde{\Phi}$ , the scaled-SaFARi produces an SSM with $A$ and $B$ given by:
57
+
58
+ $$
59
+ \frac {\partial}{\partial T} \vec {c} (T) = - \frac {1}{T} A \vec {c} (T) + \frac {1}{T} B u (T), \quad A _ {i, j} = \delta_ {i, j} + \int_ {0} ^ {1} t ^ {\prime} \left. \frac {\partial}{\partial t} \overline {{\phi}} _ {i} \right| _ {t = t ^ {\prime}} \widetilde {\phi} _ {j} \left(t ^ {\prime}\right) d t ^ {\prime}, \quad B _ {i} = \overline {{\phi}} _ {i} (1) \tag {3}
60
+ $$
61
+
62
+ while the translated-SaFARi produces an SSM with the $A$ and $B$ given by:
63
+
64
+ $$
65
+ \frac {\partial}{\partial T} \vec {c} (T) = - \frac {1}{\theta} A \vec {c} (T) + \frac {1}{\theta} B u (T), \quad A _ {i, j} = \bar {\phi} _ {i} (0) \tilde {\phi} _ {j} (0) + \int_ {0} ^ {1} \frac {\partial}{\partial t} \bar {\phi} _ {i} \Bigg | _ {t = t ^ {\prime}} \tilde {\phi} _ {j} \left(t ^ {\prime}\right) d t ^ {\prime}, B _ {i} = \bar {\phi} _ {i} (1) \tag {4}
66
+ $$
67
+
68
+ In the appendix, we provide some theoretical background on Eq. 3 and Eq. 4 from [17].
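+ To make Eq. 3 concrete, the following NumPy sketch evaluates the scaled-SaFARi $A$ and $B$ for a real frame sampled on a uniform grid over $[0, 1]$. It is an illustrative sketch only, not the released WaLRUS code: the function name, the toy Gaussian frame, and the pseudo-inverse construction of the dual frame are assumptions of this example.
+
+ ```python
+ import numpy as np
+
+ def safari_scaled_AB(phi, dt):
+     """Numerically evaluate Eq. (3) for a real frame sampled on [0, 1].
+
+     phi : (N, L) array whose rows are frame elements sampled at t = 0, dt, ..., 1.
+     Returns A of shape (N, N) and B of shape (N,).
+     """
+     N, L = phi.shape
+     t = np.linspace(0.0, 1.0, L)
+     # Dual frame via pseudo-inverse of the sampled analysis operator (real case).
+     phi_dual = np.linalg.pinv(dt * phi).T           # (N, L), rows are dual elements
+     dphi = np.gradient(phi, dt, axis=1)             # d/dt of each frame element
+     A = np.eye(N) + (dphi * t) @ phi_dual.T * dt    # A_ij = delta_ij + integral of t * phi_i' * dual_j
+     B = phi[:, -1].copy()                           # B_i = phi_i(1)
+     return A, B
+
+ # Toy usage with a small frame of shifted Gaussian bumps (illustration only).
+ L_grid = 4096
+ dt = 1.0 / (L_grid - 1)
+ t = np.linspace(0.0, 1.0, L_grid)
+ centers = np.linspace(0.1, 0.9, 9)
+ phi = np.exp(-((t[None, :] - centers[:, None]) / 0.08) ** 2)
+ A, B = safari_scaled_AB(phi, dt)
+ print(A.shape, B.shape)                             # (9, 9) (9,)
+ ```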
69
+
70
+ Incremental update of SSMs: The differential equation in Eq. 1 can be solved incrementally. Following [9], we adopt the Generalized Bilinear Transform (GBT) [24], given by Eq. 5, for its superior numerical accuracy in first-order SSMs.
71
+
72
+ $$
73
+ c (t + \delta t) = \left(I + \delta t \alpha A _ {t + \delta t}\right) ^ {- 1} \left[ \left(I - \delta t (1 - \alpha) A _ {t}\right) c (t) + \delta t B (t) u (t) \right] \tag {5}
74
+ $$
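+ As a reference implementation of Eq. 5, here is a minimal single-step GBT update in NumPy; the function name and argument layout are assumptions of this sketch, and $\alpha = 0.5$ recovers the bilinear rule adopted in [9].
+
+ ```python
+ import numpy as np
+
+ def gbt_step(c, u, A_t, A_next, B_t, dt, alpha=0.5):
+     """One Generalized Bilinear Transform step of Eq. (5).
+
+     c      : (N,) current state vector
+     u      : scalar input sample at time t
+     A_t    : (N, N) state matrix at time t; A_next is the state matrix at t + dt
+     alpha  : 0.5 gives the bilinear rule, 1.0 gives backward Euler
+     """
+     N = c.shape[0]
+     lhs = np.eye(N) + dt * alpha * A_next
+     rhs = (np.eye(N) - dt * (1.0 - alpha) * A_t) @ c + dt * B_t * u
+     return np.linalg.solve(lhs, rhs)
+ ```
+
+ In practice the left-hand inverse is reused when $A$ is constant (translated measure) or avoided entirely by diagonalizing $A$, as described next.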
75
+
76
+ Diagonalization of $A$ : Each GBT step involves matrix inversion and multiplication. If $A(t)$ has time-independent eigenvectors (e.g., $A(t) = g(t)A$ ), it can be diagonalized as $A(t) = V\Lambda(t)V^{-1}$ , allowing a change of variables $\widetilde{c} = V^{-1}c$ and $\widetilde{B} = V^{-1}B(t)$ , yielding:
77
+
78
+ $$
79
+ \frac {\partial}{\partial t} \widetilde {c} = - \Lambda (t) \widetilde {c} + \widetilde {B} u (t), \tag {6}
80
+ $$
81
+
82
+ This reduces each update to elementwise operations, significantly lowering computational cost.
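+ A corresponding sketch of the diagonalized update of Eq. 6, reduced to $N$ independent scalar operations; the one-time eigendecomposition setup and the scalar factor $g(t)$ (equal to $1/T$ for the scaled measure and $1/\theta$ for the translated one) are spelled out explicitly. Names are illustrative, not the paper's code.
+
+ ```python
+ import numpy as np
+
+ # One-time setup, assuming A(t) = g(t) * A with time-independent eigenvectors V:
+ #   lam, V = np.linalg.eig(A); b_tilde = np.linalg.solve(V, B); c_tilde = np.linalg.solve(V, c)
+
+ def diagonalized_step(c_tilde, u, lam, b_tilde, g, dt, alpha=0.5):
+     """Elementwise GBT step for the diagonalized system of Eq. (6). Sketch only."""
+     lam_t = g * lam                                  # eigenvalues of A(t) at this step
+     num = (1.0 - dt * (1.0 - alpha) * lam_t) * c_tilde + dt * g * b_tilde * u
+     den = 1.0 + dt * alpha * lam_t
+     return num / den                                 # N independent scalar updates
+ ```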
83
+
84
+ # 2.1 Wavelet Frames
85
+
86
+ Wavelet frames offer a multiresolution analysis that captures both temporal and frequency characteristics of signals, making them particularly effective for representing non-stationary or long-range dependent data [25]. Initiated by [26] and formalized by [27], wavelet theory gained prominence with Ingrid Daubechies' seminal work [28], which introduced compactly supported orthogonal wavelets. Since then, wavelets have played a central role in modern signal processing [29].
87
+
88
+ Wavelet analysis decomposes a signal $f(t)$ into dilations and translations of a mother wavelet $\psi(t)$ , enabling simultaneous localization in time and frequency. The discrete wavelet transform is
89
+
90
+ $$
91
+ W (j, k) = \int_ {- \infty} ^ {\infty} f (t) \psi_ {j, k} ^ {*} (t) d t, \quad \psi_ {j, k} (t) = \frac {1}{\sqrt {2 ^ {- j}}} \psi \left(\frac {t - k}{2 ^ {- j}}\right).
92
+ $$
93
+
94
+ Unlike global bases such as Fourier or polynomials, which struggle with localized discontinuities, wavelets provide sparse representations of signals with singularities, such as jumps or spikes [18, 30]. Their local support yields small coefficients in smooth regions and large coefficients near singularities, enabling efficient compression and accurate reconstruction. These properties make wavelet frames a natural and powerful choice for time-frequency analysis in a wide range of practical applications.
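+ The sparsity argument above is easy to verify numerically. The snippet below, which assumes the PyWavelets package purely for illustration, computes a multi-level DWT of a signal containing a single jump and counts how few fine-scale coefficients are non-negligible.
+
+ ```python
+ import numpy as np
+ import pywt  # PyWavelets, assumed available for this illustration
+
+ t = np.linspace(0.0, 1.0, 1024)
+ f = np.where(t < 0.5, 0.0, 1.0)           # piecewise-constant signal with one jump
+
+ # Multi-level DWT with the Daubechies-22 wavelet used later in this paper.
+ coeffs = pywt.wavedec(f, 'db22', level=3)
+ detail = coeffs[-1]                       # finest-scale detail coefficients
+ n_active = np.count_nonzero(np.abs(detail) > 1e-8)
+ print(f"{n_active} of {detail.size} fine-scale coefficients are non-negligible")
+ ```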
95
+
96
+ ![](images/16d7e38b6091cf3b9aaadd4fc8de52bf7269bb4cb212d7134f9167bd3c01c154.jpg)
97
+ Figure 2: A diagram of the relationships between HiPPO, SaFARi, WaLRUS (this work), and SSM-based models such as S4 and Mamba. The focus of this work is on the development of a wavelet-based SSM in a function approximation task, which could later be used as a drop-in replacement for the SSM layer in a learned model.
98
+
99
+ ![](images/c105889c3730afe65076618496a369536119b37dc10ad9fb9b110f5a32be02c2.jpg)
100
+ Figure 3: Left: Elements of a Daubechies-22 wavelet frame, with father wavelet $\phi$ , mother wavelet $\psi$ , and two scales. Right: The scaled and translated $A$ matrices for WaLRUS with $N = 21$ .
101
+
102
+ ![](images/8942678cf0ff0e7ab2b70bf9931446c5c1ce35ba3dd338191f98d73a0f8f0f1e.jpg)
103
+
104
+ ![](images/6d2b8624bfeb8af27c63b060c29bb457a628114901659ac8bbc325b67655d929.jpg)
105
+
106
+ # 3 WaLRUS: Wavelet-based SSMs
107
+
108
+ Daubechies wavelets [18, 28] provide a particularly useful implementation of a SaFARi SSM. While there are different types of commonly used wavelets, Daubechies wavelets are of particular interest in signal representation due to their maximal vanishing moments over compact support.
109
+
110
+ To construct the frame, we use the usual dyadic scaling for multiresolution analysis; that is, scaling the mother wavelets by a factor of two at each level. For each scale, different shifts along the x-axis are introduced. Compressive wavelet frames are truncated versions of wavelet frames that contain only a few of the coarser scales, and introduce overlapping shifts to keep the expressivity and satisfy the frame condition (See Mallat, [29]). The interplay between the retained scales and the minimum required overlap to maintain the expressivity is extensively studied in the wavelet literature [18, 28, 29]. If there is excess overlap in shifts, the wavelet frame becomes redundant, and redundancy has advantages in expressivity and robustness to noise.
111
+
112
+ Figure 3, left, gives a visual representation of how we construct such a frame. The frame consists of shifted copies of the father wavelet $\phi$ at one scale, and shifted copies of a mother wavelet $\psi$ at different scales, with overlaps that introduce redundancy. Figure 3, right, shows the resulting $A$ matrices for the scaled and translated WaLRUS. $^{1}$
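+ For concreteness, the sketch below builds such a redundant frame on a uniform grid by stacking shifted copies of the father wavelet at one scale and shifted, dyadically dilated copies of the mother wavelet at several scales. It samples the D22 wavelet with PyWavelets; the grid length, shift spacing, and number of scales are illustrative defaults rather than the settings listed in Table 3.
+
+ ```python
+ import numpy as np
+ import pywt  # PyWavelets, assumed available for this illustration
+
+ def wavelet_frame(name='db22', L=4096, shift=0.05, n_scales=2):
+     """Stack shifted father wavelets (one scale) and shifted, dilated mother
+     wavelets (n_scales dyadic scales) into a redundant frame on [0, 1]."""
+     phi, psi, x = pywt.Wavelet(name).wavefun(level=10)
+     x = x / x[-1]                                   # normalize the support to [0, 1]
+     t = np.linspace(0.0, 1.0, L)
+     rows = []
+     for s in np.arange(-1.0, 1.0, shift):           # father wavelet, coarsest scale
+         rows.append(np.interp(t - s, x, phi, left=0.0, right=0.0))
+     for j in range(n_scales):                       # mother wavelet, finer scales
+         scale = 2.0 ** (-j)
+         for s in np.arange(-1.0, 1.0, shift * scale):
+             rows.append(np.interp((t - s) / scale, x, psi, left=0.0, right=0.0))
+     return np.vstack(rows)
+
+ Phi = wavelet_frame()
+ print(Phi.shape)                                    # (number of frame elements, L)
+ ```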
113
+
114
+ Some recent works [31, 32] have conceptually connected the use of wavelets and SSM-based models (namely Mamba). These efforts are fundamentally distinct from ours in that they perform a multiresolution analysis on the input to the model only. No change is made to the standard Mamba SSM layer.
115
+
116
+ This work, on the other hand, is the first to challenge the ubiquity of the Legendre-based SSM, and present alternative wavelet-based machinery for the core of powerful models like Mamba. WaLRUS could be used as a drop-in replacement for any existing SSM-based framework. However, before simply substituting a part in a larger system, we must first justify how and why a different SSM can improve performance. This paper presents a tool that stands alone as an online function approximator, and also provides a foundational building block for future integration in SSM-based models.
117
+
118
+ # 3.1 Redundancy of the wavelet frame and size of the SSM
119
+
120
+ In contrast to orthonormal bases, redundant frames allow more than one way to represent the same signal. This redundancy arises from the non-trivial null space of the associated frame operator, meaning that multiple coefficient vectors can yield the same reconstructed function. Although the representation is not unique, it is still perfectly valid, and this flexibility offers several key advantages in signal processing. In particular, redundancy can improve robustness to noise, enable better sparsity for certain signal classes, and enhance numerical stability in inverse problems [33-35].
121
+
122
+ We distinguish between the total number of frame elements $N_{\mathrm{full}}$ and the effective dimensionality $N_{\mathrm{eff}}$ of the subspace where the meaningful representations reside. In other words, while the frame may consist of $N_{\mathrm{full}}$ vectors, the actual information content lies in a lower-dimensional subspace of size $N_{\mathrm{eff}}$ . This effective dimensionality can be quantified by analyzing the singular-value spectrum of the frame operator [29, 33].
123
+
124
+ For the WaLRUS SSMs described in this work, we first derive $A_{N_{\mathrm{full}}}$ using all elements of the redundant frame. We then diagonalize $A$ and reduce it to a size of $N_{\mathrm{eff}}$ . This ensures that different frame choices, whether orthonormal or redundant, can be fairly and meaningfully compared in terms of computational cost, memory usage, and approximation accuracy. The exact relationship between the wavelet frame and the resulting $N_{\mathrm{eff}}$ of the $A$ matrix depends not only on the overlap of the shifts in the frame, but also on the type (and order) of chosen wavelet, and number of scales. Determining the "optimal" overlap or $N_{\mathrm{eff}}$ is application-specific and an area for future research.
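+ As a minimal sketch, $N_{\mathrm{eff}}$ can be estimated directly from the singular-value spectrum of the sampled frame; the threshold below mirrors the rcond truncation described in Appendix A.2.4, and the toy redundant frame is only for illustration.
+
+ ```python
+ import numpy as np
+
+ def effective_rank(frame, rcond=0.01):
+     """Count singular values of the sampled frame above rcond * sigma_max."""
+     s = np.linalg.svd(frame, compute_uv=False)
+     return int(np.sum(s > rcond * s[0]))
+
+ # Toy example: a deliberately redundant set (each element appears twice), so
+ # N_full = 16 while the information lives in an 8-dimensional subspace.
+ t = np.linspace(0.0, 1.0, 512)
+ base = np.stack([np.sin(2 * np.pi * k * t) for k in range(1, 9)])
+ frame = np.vstack([base, base])
+ print(effective_rank(frame))   # -> 8
+ ```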
125
+
126
+ # 3.2 Computational complexity of WaLRUS
127
+
128
+ For a sequence of length $L$ , scaled-SaFARi has $O(N^3 L)$ complexity due to solving an $N$ -dimensional linear system at each step, while translated-SaFARi can reuse matrix inverses, and thus has $O(N^2 L)$ complexity, assuming no diagonalization [17]. When the state matrix $A$ is diagonalizable, the complexity reduces to $O(NL)$ and can further accelerate to $O(L)$ with parallel processing on independent scalar SSMs.
129
+
130
+ We observe that all of the scaled and translated WaLRUS SSMs we implemented, regardless of dimension, were stably diagonalizable. Further research is required to determine whether Daubechies wavelets will always yield diagonalizable SSMs. Legendre-based SSMs, on the other hand, are not stably diagonalizable [9]. Although [9] proposed a fast sequential HiPPO-LegS update to achieve $O(NL)$ complexity, [17] showed that it cannot be parallelized to $O(L)$. Moreover, no efficient sequential update exists for HiPPO-LegT, leaving Legendre-based SSMs at a disadvantage during inference when sequential updates are needed.
131
+
132
+ As sequence length increases, step-wise updates become a bottleneck, especially during training when the entire sequence is available upfront. This can be mitigated by using convolution kernels instead of sequential updates. Precomputing the convolution kernel and applying it via convolution accelerates computation, leveraging GPU-based parallelism to achieve $O(\log L)$ run-time complexity for diagonalizable SSMs. This optimization is feasible for both WaLRUS and Fourier-based SSMs. Although Legendre-based SSMs can attain similar asymptotic complexity through structured algorithms [10, 12], their nondiagonal nature prevents decoupling into $N$ independent SSMs.
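+ The following sketch illustrates the convolution-based evaluation for a diagonal, time-invariant SSM (as in the translated measure). For clarity it uses a simple forward-Euler discretization rather than the GBT of Eq. 5, so that the kernel has an explicit closed form; the function name and layout are assumptions of this example.
+
+ ```python
+ import numpy as np
+ from scipy.signal import fftconvolve
+
+ def diag_ssm_as_convolution(lam, b, u, dt):
+     """Evaluate a diagonal, time-invariant SSM on a whole sequence at once.
+
+     Elementwise recurrence (forward Euler): c[k+1] = (1 - dt*lam)*c[k] + dt*b*u[k].
+     Unrolling gives a convolution with kernel K[m] = dt * b * (1 - dt*lam)**m,
+     applied here with an FFT instead of a sequential scan.
+     """
+     L = len(u)
+     a = 1.0 - dt * lam                                # (N,) per-state decay factors
+     kernel = dt * b[:, None] * a[:, None] ** np.arange(L)[None, :]
+     out = np.stack([fftconvolve(k, u)[:L] for k in kernel])
+     return out                                        # out[:, n] = state after input n
+ ```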
133
+
134
+ # 3.3 Representation errors in the translated WaLRUS
135
+
136
+ Truncated representations in SSMs inevitably introduce errors, as discarding higher-order components limits reconstruction fidelity [17]. SaFARi only investigated these errors for scaled SSMs, leaving their approximation accuracy unquantified. Visualizing the convolution kernels generated by different SSMs offers some insight into the varying performance of different SSMs on the function approximation task. An "ideal" kernel would include a faithful representation for each element of the basis or frame from $T = 0$ to $T = W$ , where $W$ is the window width, and it would contain no non-zero elements between $W$ and $L$ . However, certain bases generate kernels with warping issues, as illustrated in Fig. 4.
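+ One simple way to obtain such a kernel numerically is to drive the discretized, time-invariant SSM with a unit impulse and record the state trajectory, as in the sketch below; the function name and GBT step layout are assumptions of this example, and the resulting columns can be compared against Figs. 4 and 5.
+
+ ```python
+ import numpy as np
+
+ def empirical_kernel(A, B, dt, length, alpha=0.5):
+     """Impulse response of a time-invariant (translated-measure) SSM.
+
+     Column m shows how an input m steps in the past is weighted in the
+     current state vector, i.e., the convolution kernel of the SSM.
+     """
+     N = A.shape[0]
+     step_lhs = np.linalg.inv(np.eye(N) + dt * alpha * A)
+     step_rhs = np.eye(N) - dt * (1.0 - alpha) * A
+     c = step_lhs @ (dt * B)                  # state right after the unit impulse
+     K = np.zeros((N, length))
+     for m in range(length):
+         K[:, m] = c
+         c = step_lhs @ (step_rhs @ c)        # free evolution once the impulse has passed
+     return K
+ ```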
137
+
138
+ The HiPPO-LegT kernel loses coefficients due to warping within the desired translating window (see areas B and C of Fig. 4). For higher degrees of Legendre polynomials, the kernel exhibits an all-zero region at the beginning and end of the sliding window. This implies that high-frequency information in the input is not captured at the start or end of the sliding window, and the extent of this dead zone
139
+
140
+ ![](images/15272d301672268502b658ed63242cfa562c7e622af2b61457726a75843549b9.jpg)
141
+ Figure 4: The kernel generated by HiPPO-LegT with window size $W = 2000$ and representation size $N = 500$. Three key non-ideal aspects of the kernel are noticeable: A) poor localization due to substantial non-zero values outside $W$, B) coefficient loss at the bottom left of the kernel, and C) coefficient loss at the bottom right of the kernel for $t \in (1500, 2000)$.
142
+
143
+ ![](images/aaa9f5bc60d1d6df57822df5bf1f5c2e707bc68d569863e3168dd8edbee9e08c.jpg)
144
+ Figure 5: Left: The ideal kernels, which yield zero representation error, are shown for Translated-WaLRUS (using the D22 wavelet), HiPPO-LegT, and HiPPO-FouT. Right: The corresponding kernels generated by the translated models are presented for comparison. WaveT has superior localization within the window of interest compared to HiPPO-LegT and HiPPO-FouT.
145
+
146
+ increases with higher frequencies. The translated Fourier kernel primarily suffers from the opposite problem: substantial nonzero elements outside the kernel window indicate that FouT struggles to effectively "forget" historical input values. Thus, contributions from input signals outside the sliding window appear as representation errors. LegT also has this problem, to a lesser extent (see area A of Fig. 4 for a closer view of the kernel).
147
+
148
+ A visual inspection of Fig. 5 reveals that the translated-WaLRUS kernel closely matches the idealized version, whereas both FouT and LegT exhibit significant errors in their computed kernels. We emphasize that the issues observed with LegT and FouT arise from inherent limitations of the underlying SSMs themselves and are not due to the choice of input signal classes.
149
+
150
+ # 4 Experiments
151
+
152
+ The following section deploys the WaLRUS SSM on synthetic and real signals for the task of function approximation, comparing its performance with existing models in the literature. We evaluate performance in terms of MSE as well as the models' ability to track important signal features such as singularities, and show that WaLRUS can have an edge over the state-of-the-art polynomial-based SSMs.
153
+
154
+ To benchmark WaLRUS against state-of-the-art SSMs, we implement two variants: Scaled-WaLRUS and Translated-WaLRUS, which we will call WaveS and WaveT respectively, following HiPPO's convention. These models are compared against the top-performing HiPPO-based SSMs. Further details on the wavelet frames used in each experiment are provided in Appendix A.2.4, and code can be found at https://github.com/echbaba/walrus.
155
+
156
+ We conduct experiments on the following datasets:
157
+
158
+ ![](images/cdb33d9e37e40eb951242d2c7bbb7272f08384e4df85f757f22becd7b910334f.jpg)
159
+ Figure 6: Comparing reconstruction MSE between WaveS, LegS, and FouS. Error bars represent the first and third quartiles of the MSE. WaveS produces the lowest MSE in each dataset.
160
+
161
+ <table><tr><td>Dataset</td><td>LegS</td><td>FouS</td><td>WaveS</td></tr><tr><td>M4</td><td>0%</td><td>0.47%</td><td>99.53%</td></tr><tr><td>Speech</td><td>4.25%</td><td>0%</td><td>95.75%</td></tr><tr><td>Blocks</td><td>0%</td><td>0%</td><td>100%</td></tr><tr><td>Spikes</td><td>0%</td><td>0%</td><td>100%</td></tr><tr><td>Bumps</td><td>0%</td><td>0%</td><td>100%</td></tr><tr><td>Piecepoly</td><td>1.00%</td><td>0%</td><td>99.00%</td></tr></table>
162
+
163
+ Table 1: Percent of tests where each basis had the lowest overall MSE.
164
+
165
+ M4 Forecasting Competition [36]: A diverse collection of univariate time series with varying sampling frequencies taken from domains such as demographic, finance, industry, macro, micro, etc.
166
+
167
+ Speech Commands [37]: A dataset of one-second audio clips featuring spoken English words from a small vocabulary, designed for benchmarking lightweight audio recognition models.
168
+
169
+ Wavelet Benchmark Collection [38]: A synthetic benchmark featuring signals with distinct singularity structures, such as Bumps, Blocks, Spikes, and Piecewise Polynomials. We generate randomized examples from each class, with further details and visualizations provided in Appendix A.2.2.
170
+
171
+ # 4.1 Comparisons among frames
172
+
173
+ We note that no frame is universally optimal for all input classes, as different classes of input signals exhibit varying decay rates in representation error. However, due to the superior localization and near-optimal error decay rate of wavelet frames, wavelet-based SSMs consistently show an advantage over Legendre and Fourier-based SSMs across a range of real-world and synthetic signals. These experiments position WaLRUS as a powerful and adaptable approach for scalable, high-fidelity signal representation.
174
+
175
+ # 4.1.1 Experimental setup
176
+
177
+ The performance of SSMs in online function approximation can be evaluated in several ways. One metric is the mean squared error (MSE) of the reconstructed signal compared to the original. In the following sections, we compare the overall MSE for SSMs with a scaled measure, and the running MSE for SSMs with a translated measure.
178
+
179
+ Additionally, in some applications, the ability to capture specific features of a signal may be of greater interest than the overall MSE. As an extreme case, consider a signal that is nearly always zero, but contains a few isolated spikes. If our estimated signal is all zero, then the MSE will be small, but all of the information of interest has been lost.
180
+
181
+ In all the experiments, we use equal SSM sizes $N_{\mathrm{eff}}$ , as described in Sec. 3.1.
182
+
183
+ # 4.1.2 Function approximation with the scaled measure
184
+
185
+ In this experiment, we construct Scaled-WaLRUS, HiPPO-LegS, and HiPPO-FouS with equal effective sizes (see Appendix A.2.4). Frame sizes are empirically selected to balance computational cost and approximation error across datasets.
186
+
187
+ Fig. 6 shows the average MSE across random instances of multiple datasets. Not only is the average MSE lowest for WaLRUS for all datasets, but even where there is high variance in the MSE, all methods tend to keep the same relative performance. That is, the overlap in the error bars in Fig. 6 does not imply that the methods are indistinguishable; rather, for a given instance of a dataset, the MSE across all three SSM types tends to shift together, maintaining the MSE ordering WaveS <
188
+
189
+ <table><tr><td rowspan="2"></td><td rowspan="2">Dataset: Basis/Frame:</td><td colspan="3">Spikes</td><td colspan="3">Bumps</td></tr><tr><td>Legendre</td><td>Fourier</td><td>Wavelets</td><td>Legendre</td><td>Fourier</td><td>Wavelets</td></tr><tr><td rowspan="5">Scaled</td><td>Peaks missed</td><td>2.5%</td><td>0.62%</td><td>0%</td><td>0.29%</td><td>0.30%</td><td>0%</td></tr><tr><td>False peaks</td><td>1.6%</td><td>1.6%</td><td>0.01%</td><td>0.3%</td><td>1.9%</td><td>0%</td></tr><tr><td>Instance-wise wins</td><td>76%</td><td>92.9%</td><td>100%</td><td>97.1%</td><td>96.9%</td><td>100%</td></tr><tr><td>Relative amplitude error</td><td>16.2%</td><td>11.8%</td><td>5.5%</td><td>12.4%</td><td>16.2%</td><td>6.5%</td></tr><tr><td>Average displacement</td><td>18.8</td><td>32.0</td><td>10.0</td><td>12.7</td><td>33.7</td><td>7.1</td></tr><tr><td rowspan="5">Translated</td><td>Peaks missed</td><td>6.4%</td><td>13.0%</td><td>0.27%</td><td>1.12%</td><td>29.76%</td><td>0.08%</td></tr><tr><td>False peaks</td><td>1.1%</td><td>0.05%</td><td>0.22%</td><td>0.43%</td><td>0.28%</td><td>0.20%</td></tr><tr><td>Instance-wise wins</td><td>36.9%</td><td>13.65%</td><td>99.95%</td><td>85.1%</td><td>0.2%</td><td>100%</td></tr><tr><td>Relative amplitude error</td><td>19.6%</td><td>28.4%</td><td>3.5%</td><td>6.9%</td><td>28.4%</td><td>2.5%</td></tr><tr><td>Average displacement</td><td>6.0</td><td>5.4</td><td>4.3</td><td>5.5</td><td>5.8</td><td>4.8</td></tr></table>
190
+
191
+ Table 2: Performance comparison of WaLRUS-Wavelets, HiPPO-Legendre, and HiPPO-Fourier for peak detection with the scaled and translated measures. WaLRUS shows a significant advantage over the HiPPO SSMs in successfully remembering singularities.
192
+
193
+ $\mathrm{LegS} < \mathrm{FouS}$ . To highlight this result, the percentage of instances where each SSM had the best performance is also provided in Table 1.
194
+
195
+ The representative power of WaLRUS is attributed to its ability to minimize truncation and mixing errors by selecting frames that capture signal characteristics with higher fidelity. See [17] for further details.
196
+
197
+ # 4.1.3 Peak detection with the scaled measure
198
+
199
+ In this experiment, we aim to detect the locations of random spikes in input sequences using Scaled-WaLRUS, FouS, and LegS, all constructed with equal sizes. We generate random spike sequences, add Gaussian noise $(\mathrm{SNR} = 0.001)$, and compute their representations with Daubechies wavelets, Legendre polynomials, and Fourier series. The reconstructed signals are transformed into wavelet coefficients, and spike locations are identified following the method in [30].
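+ The detection step follows the wavelet-based singularity detection of [30]. As a simplified, hedged stand-in, the sketch below matches peaks found in a reconstructed signal, using scipy's generic peak finder rather than the method of [30], against ground-truth spike locations within amplitude and displacement thresholds, yielding quantities analogous to the "Peaks missed" and "False peaks" rows of Table 2.
+
+ ```python
+ import numpy as np
+ from scipy.signal import find_peaks
+
+ def peak_metrics(reconstruction, true_locs, amp_thresh, dis_thresh):
+     """Simplified peak-matching metrics; thresholds are illustrative."""
+     true_locs = np.asarray(true_locs)
+     detected, _ = find_peaks(reconstruction, height=amp_thresh)
+     hits = sum(detected.size > 0 and np.min(np.abs(detected - loc)) <= dis_thresh
+                for loc in true_locs)
+     false = sum(np.min(np.abs(true_locs - d)) > dis_thresh for d in detected)
+     peaks_missed = 100.0 * (1.0 - hits / len(true_locs))
+     false_peaks = 100.0 * false / max(len(detected), 1)
+     return peaks_missed, false_peaks
+ ```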
200
+
201
+ To evaluate performance, we compare the relative amplitude and displacement of detected spikes with their ground truth (see Fig. 7). This process is repeated for 1000 random sequences, each containing 10 spikes. Table 2 summarizes the average number of undetected spikes for each SSM and the instance-wise win percentage, representing the number of instances where each SSM missed no more peaks than the other SSMs. Note that these percentages do not sum to 100, as some instances result in identical spike detection across all models.
202
+
203
+ As shown in Table 2, WaveS misses significantly fewer spikes than FouS and LegS, with lower displacement errors and reduced amplitude loss. Figure 1 illustrates an example where WaLRUS successfully captures closely spaced spikes that are missed by LegS and FouS, demonstrating its superior time resolution.
204
+
205
+ ![](images/4f1def6e8966af4f3fcc96319bbdfa41dd3c068cfe652f9a0e90af14e012f14a.jpg)
206
+ Figure 7: Illustration of the metrics to evaluate performance of SSMs on different datasets in Table 2.
207
+
208
+ ![](images/4e775dc52e2e4e464c9e6d1fc244de04bfc9169608bf3123ae892d14e49dcf0a.jpg)
209
+
210
+ ![](images/a1adf01ad3f3db2275d4937a9d6b9cab502dc0c2375a7d15f299957a084e448c.jpg)
211
+
212
+ ![](images/a4cea581f419a380000ec595c0cd7a8e5a8fed996f9913d86ccb05fb4b775e18.jpg)
213
+
214
+ Figure 8: For each dataset, the median and (0.4, 0.6) quantile of running reconstruction MSE across different instances is demonstrated in different colors for WaveT, LegT, and FouT. WaveT captures information in the input signals with a higher fidelity than LegT and FouT.
215
+ ![](images/42f6e23f21fee0d3b7448f23fc51fa17c128fbaadc3cb84d509d829f9a3b0a67.jpg)
216
217
+
218
+ ![](images/95b138a41c7d66f37138beb9d794e2932dd6eda78d0e9921703170b93d0ca596.jpg)
219
+
220
+ ![](images/995f1fc66e18e87dcc649613b14f3c845eff4a23adc1324bb130956a5a38512e.jpg)
221
+
222
+ # 4.1.4 Function approximation with the translated measure
223
+
224
+ In this experiment, we construct WaveT, LegT, and FouT SSMs, all with equal effective sizes (see Appendix A.2.4). The chosen effective sizes are smaller than those we used for the scaled measure since the translated window contains lower frequency content within each window, making it possible to reconstruct the signal with smaller frames. Then, for each instance of input signal, the reconstruction MSE at each time step is calculated and plotted in Fig. 8.
225
+
226
+ This plot shows how the MSE evolves over time across multiple instances, providing a comparison of the running MSEs for each SSM. The results demonstrate that Translated-WaLRUS consistently achieves slightly better fidelity than LegT and significantly outperforms FouT across all datasets.
227
+
228
+ As discussed in Section 3.3, the reconstruction error stems from two main factors: (1) non-idealities in the translated SSM kernel, affecting its ability to retain relevant information within the window while effectively forgetting data outside it (see Fig. 4), and (2) the extent to which these fundamental non-idealities are activated by the input signal. For example, signals with large regions of zero values are less impacted by kernel inaccuracies, as the weights outside the kernel contribute minimally to reconstruction.
229
+
230
+ WaveT achieves a modest, and in some cases negligible MSE improvement over LegT (e.g., M4 and Blocks). However, the kernel-based limitations highlighted in Section 3.3 may have a more pronounced effect on longer sequences or different datasets.
231
+
232
+ # 4.1.5 Peak detection with the translated measure
233
+
234
+ In this experiment, we evaluate the ability of WaveT, FouT, and LegT to retain information about singularities in signals, following the setup in Section 4.1.3, but with a translated SSM. We generate 2,000 random sequences, each containing 20 spikes. The average number of undetected spikes for each SSM, along with instance-wise win percentages, is reported in Table 2. As in the scaled measure experiment, the percentages do not sum to 100 due to ties across SSMs. Table 2 shows that WaveT consistently outperforms FouT and LegT, with fewer missed peaks, reduced displacement, and less amplitude loss.
235
+
236
+ # 5 Limitations
237
+
238
+ In this work we have implemented only one type of wavelet (Daubechies-22), as our purpose is to introduce practical and theoretical reasons to replace polynomial SSMs with wavelet SSMs. Other wavelets (biorthogonal, coiflets, Morlets, etc.) could also be used, with some caveats. First, we require a differentiable frame [17], so nondifferentiable wavelets like Haar wavelets or other lower-order Daubechies and Coiflets cannot be used with this method. Second, the redundancy of the frame (and the resulting $N_{\mathrm{eff}}$ of the $A$ matrix) depends on the shape of the wavelet's function and the chosen shifts and scales of this function. Other wavelet types, and other choices of shift and scale, may exhibit better or worse performance and dimensionality reduction, and this is an important question for future work.
239
+
240
+ Additionally, we emphasize that the choice of frame is application-dependent. If the signal is known to be smooth and periodic, a wavelet-based SSM is not likely to outperform a Fourier-based SSM, for example. The introduction of WaLRUS is not intended to be a one-size-fits-all model, but rather a broadly-applicable tool that combines compressive online function-approximation SSMs with the expressive power of wavelets.
241
+
242
+ # 6 Conclusions
243
+
244
+ We have demonstrated in this paper how function approximation with SSMs, initially proposed by [9] and subsequently extended to general frames, can be improved using wavelet-based SSMs. SSMs constructed with wavelet frames can provide higher fidelity in signal reconstruction than the state-of-the-art Legendre and Fourier-based SSMs over both scaled and translated measures. Future work will explore alternate wavelet families, and the trade-offs in effective size, frequency space coverage, and representation capabilities of different frames.
245
+
246
+ Moreover, since the Legendre-based HiPPO SSM forms the core of S4 and Mamba, and WaLRUS provides a drop-in replacement for HiPPO, WaLRUS could be used to initialize SSM-based machine learning models—potentially providing more efficient training. As AI becomes ubiquitous, and the demand for computation explodes, smarter and more task-tailored ML architectures can help mitigate the strain on energy and environmental resources.
247
+
248
+ # Acknowledgments
249
+
250
+ Special thanks to T. Mitchell Roddenberry for fruitful conversations and insights. This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571, N00014-20-1-2534, N00014-18-1-2047, and MURI N00014-20-1-2787; AFOSR grant FA9550-22-1-0060; and DOE grant DE-SC0020345. Additional support was provided by a Vannevar Bush Faculty Fellowship, the Rice Academy of Fellows, and the Rice University and Houston Methodist 2024 Seed Grant Program.
251
+
252
+ # References
253
+
254
+ [1] Nikola Zubić, Mathias Gehrig, and Davide Scaramuzza. State space models for event cameras. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.
255
+ [2] Sina Alemohammad, Hossein Babaei, Randall Balestriero, Matt Y. Cheung, Ahmed Imtiaz Humayun, Daniel LeJeune, Naiming Liu, Lorenzo Luzi, Jasper Tan, Zichao Wang, and Richard G. Baraniuk. Wearing a mask: Compressed representations of variable-length sequences using recurrent neural tangent kernels. In 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2950-2954, 2021.
256
+ [3] Eric Nguyen, Karan Goel, Albert Gu, Gordon Downs, Preey Shah, Tri Dao, Stephen Baccus, and Christopher Ré. S4ND: Modeling images and videos as multidimensional signals with state spaces. In Advances in Neural Information Processing Systems, 2022.
257
+ [4] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
258
+
259
+ [5] Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990.
260
+ [6] Mike Schuster and Kuldip K Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681, 1997.
261
+ [7] Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602-610, 2005.
262
+ [8] Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
263
+ [9] Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, and Christopher Ré. HiPPO: Recurrent memory with optimal polynomial projections. In Advances in Neural Information Processing Systems, 2020.
264
+ [10] Albert Gu, Karan Goel, and Christopher Re. Efficiently modeling long sequences with structured state spaces. In International Conference on Learning Representations, 2022.
265
+ [11] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.
266
+ [12] Albert Gu, Isys Johnson, Aman Timalsina, Atri Rudra, and Christopher Re. How to train your HiPPO: State space models with generalized orthogonal basis projections. In International Conference on Learning Representations, 2023.
267
+ [13] Ankit Gupta, Albert Gu, and Jonathan Berant. Diagonal state spaces are as effective as structured state spaces. In Advances in Neural Information Processing Systems, 2024.
268
+ [14] Albert Gu, Karan Goel, Ankit Gupta, and Christopher Ré. On the parameterization and initialization of diagonal state space models. In Advances in Neural Information Processing Systems, 2022.
269
+ [15] Jimmy T.H. Smith, Andrew Warrington, and Scott Linderman. Simplified state space layers for sequence modeling. In International Conference on Learning Representations, 2023.
270
+ [16] Ramin Hasani, Mathias Lechner, Tsun-Hsuan Wang, Makram Chahine, Alexander Amini, and Daniela Rus. Liquid structural state-space models. In International Conference on Learning Representations, 2023.
271
+ [17] Hossein Babaei, Mel White, Sina Alemohammad, and Richard G Baraniuk. SaFARi: State-space models for frame-agnostic representation. arXiv preprint arXiv:2505.08977, 2025.
272
+ [18] Ingrid Daubechies. Ten lectures on wavelets. SIAM Press, 1992.
273
+ [19] Alan V Oppenheim. Discrete-Time Signal Processing. Pearson, 1999.
274
+ [20] Agostino Abbate, Casimer DeCusatis, and Pankaj K Das. Wavelets and Subbands: Fundamentals and Applications. Springer, 2012.
275
+ [21] George EP Box, Gwilym M Jenkins, Gregory C Reinsel, and Greta M Ljung. Time Series Analysis: Forecasting and Control. John Wiley & Sons, 2015.
276
+ [22] John G Proakis. Digital Signal Processing: Principles, Algorithms, and Applications. Pearson, 2001.
277
+ [23] Paolo Prandoni and Martin Vetterli. Signal Processing for Communications. EPFL Press, 2008.
278
+ [24] Guofeng Zhang, Tongwen Chen, and Xiang Chen. Performance recovery in digital implementation of analogue systems. SIAM Journal on Control and Optimization, 45(6):2207-2223, 2007.
279
+ [25] Patrice Abry, Patrick Flandrin, and Murad S. Taqqu. Self-similarity and long-range dependence through the wavelet lens. In Paul Doukhan, George Oppenheim, and Murad S. Taqqu, editors, Theory and Applications of Long-Range Dependence, pages 527–556. Birkhäuser, 2003.
280
+
281
+ [26] Alfred Haar. Zur Theorie der Orthogonalen Funktionensysteme. PhD thesis, University of Göttingen, 1909.
282
+ [27] A. Grossmann and J. Morlet. Decomposition of Hardy functions into square integrable wavelets of constant shape. SIAM Journal on Mathematical Analysis, 15(4):723-736, 1984.
283
+ [28] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41(7):909-996, 1988.
284
+ [29] Stéphane Mallat. A Wavelet Tour of Signal Processing: The Sparse Way. Academic Press, 3rd edition, 2008.
285
+ [30] Stephane Mallat and Wen Liang Hwang. Singularity detection and processing with wavelets. IEEE Transactions on Information Theory, 38(2):617-643, 1992.
286
+ [31] Tianpei Zhang, Yiming Zhu, Jufeng Zhao, Guangmang Cui, and Yuchen Zheng. Exploring state space model in wavelet domain: An infrared and visible image fusion network via wavelet transform and state space model, 2025.
287
+ [32] Wenbin Zou, Hongxia Gao, Weipeng Yang, and Tongtong Liu. Wave-mamba: Wavelet state space model for ultra-high-definition low-light image enhancement. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 1534-1543. Association for Computing Machinery, 2024.
288
+ [33] O Christensen. An Introduction to Frames and Riesz Bases. Birkhauser, 2003.
289
+ [34] Karlheinz Grochenig. Foundations of Time-Frequency Analysis. Springer, 2001.
290
+ [35] Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 15(12):3736-3745, 2006.
291
+ [36] Spyros Makridakis, Evangelos Spiliotis, and Vassilios Assimakopoulos. The M4 competition: 100,000 time series and 61 forecasting methods. International Journal of Forecasting, 36(1):54-74, 2020.
292
+ [37] Pete Warden. Speech commands: A dataset for limited-vocabulary speech recognition. arXiv preprint arXiv:1804.03209, 2018.
293
+ [38] David L Donoho and Iain M Johnstone. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3):425-455, 1994.
294
+
295
+ # A Appendix
296
+
297
+ # A.1 SaFARi derivation for arbitrary frame
298
+
299
+ Where HiPPO [9] provided closed-form solutions to construct $A$ and $B$ for a few polynomial bases, SaFARi [17] introduced a method to build $A$ and $B$ from any arbitrary frame. The derivations provided below follow [17], and are given here as convenient reference for the reader.
300
+
301
+ Take a signal $f$ and frame $\phi$. To get a vector of weights representing a signal on a basis, we use the inner product:
302
+
303
+ $$
304
+ c _ {n} = \int f (t) \overline {{\phi (t)}} d t \tag {A.1}
305
+ $$
306
+
307
+ So at some time $T$ , we scale the magnitude of $f(t)$ and stretch the basis to match the length of $f$ :
308
+
309
+ $$
310
+ c _ {n} (T) = \int_ {t _ {0}} ^ {T} f (t) \left(\frac {1}{T - t _ {0}}\right) \overline {{\phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right)}} d t \tag {A.2}
311
+ $$
312
+
313
+ We are actually interested in the change in $c$ . We will take the partial derivative with respect to $T$ , since the coefficients update at each new time $T$ . Call the start time $t_0$ : this is 0 for the scaling case, and $t_0$ varies with the windowed case. If we call the size of the window $\theta$ , then $t_0 = T - \theta$ . The derivation below will be a generic version, then we will separate the two cases.
314
+
315
+ $$
316
+ \frac {d}{d T} c _ {n} (T) = \frac {d}{d T} \int_ {t _ {0}} ^ {T} f (t) \left(\frac {1}{T - t _ {0}}\right) \overline {{\phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right)}} d t \tag {A.3}
317
+ $$
318
+
319
+ We note that this is the derivative of an integral whose limits and integrand both depend on $T$. Thus we call on the Leibniz integral rule and find:
320
+
321
+ $$
322
+ \begin{array}{l} \frac {d}{d T} c _ {n} (T) = f (T) \left(\frac {1}{T - t _ {0}}\right) \overline {{\phi (1)}} \frac {\delta}{\delta T} (T) - f (t _ {0}) \left(\frac {1}{T - t _ {0}}\right) \overline {{\phi (0)}} \frac {\delta}{\delta T} (t _ {0}) \\ + \int_ {t _ {0}} ^ {T} f (t) \underbrace {\frac {\delta}{\delta T} \left[ \left(\frac {1}{T - t _ {0}}\right) \overline {{\phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right)}} \right]} _ {\overline {{h (t)}}} d t \tag {A.4} \\ \end{array}
323
+ $$
324
+
325
+ Some manipulation of the $h(t)$ term yields:
326
+
327
+ $$
328
+ \begin{array}{l} h (t) = \left(\frac {1}{T - t _ {0}}\right) \left[ - \frac {\delta (t _ {0})}{\delta T} \left(\frac {1}{T - t _ {0}}\right) \phi^ {\prime} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) - \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \phi^ {\prime} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \right] \\ - \left(\frac {1}{T - t _ {0}}\right) \left[ \left(\frac {1 - \frac {\delta (t _ {0})}{\delta T}}{T - t _ {0}}\right) \phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \right] \tag {A.5} \\ \end{array}
329
+ $$
330
+
331
+ Our $h(t)$ term now has the derivative of our basis $(\phi')$ in it, but we'd like to be able to combine terms with $\phi$. Therefore we can make a mapping from $\phi' \rightarrow \phi$ using the dual, $\widetilde{\phi}$:
332
+
333
+ $$
334
+ \phi^ {\prime} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) = \underbrace {\left\langle \phi^ {\prime} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) , \widetilde {\phi} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \right\rangle} _ {P} \phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \tag {A.6}
335
+ $$
336
+
337
+ Likewise:
338
+
339
+ $$
340
+ \left(t - t _ {0}\right) \phi^ {\prime} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) = \underbrace {\left\langle \left(t - t _ {0}\right) \phi^ {\prime} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) , \widetilde {\phi} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \right\rangle} _ {P _ {t}} \phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \tag {A.7}
341
+ $$
342
+
343
+ This lets us do another simplification of $h(t)$ , and group all the functions of $\phi$ . Let's also call $T - t_0 = \theta$ to save some space.
344
+
345
+ $$
346
+ h (t) = \frac {1}{\theta} \phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \left[ - \frac {\delta (t _ {0})}{\delta T} \frac {1}{\theta} P - \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \frac {1}{\theta} P _ {t} - \frac {1}{\theta} \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \right] \tag {A.8}
347
+ $$
348
+
349
+ Now we can return to Eq. A.4. $P$ is not a function of $t$, so it can be moved outside the integral. For the measures we are looking at, $\frac{\delta (t_0)}{\delta T}$ is always constant with respect to $t$: it is either 0 (scaled) or 1 (translated). We can substitute and then group as follows:
350
+
351
+ $$
352
+ \frac {d}{d T} c _ {n} (T) = \left(\frac {1}{T - t _ {0}}\right) \left[ f (T) \overline {{\phi (1)}} - f (t _ {0}) \overline {{\phi (0)}} \frac {\delta}{\delta T} (t _ {0}) \right] +
353
+ $$
354
+
355
+ $$
356
+ \left(\frac {1}{T - t _ {0}}\right) \left[ - \frac {\delta (t _ {0})}{\delta T} \bar {P} - \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \bar {P} _ {t} - \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \right] \underbrace {\int_ {t _ {0}} ^ {T} f (t) \left(\frac {1}{T - t _ {0}}\right) \overline {{\phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right)}} d t} _ {c (T)} \tag {A.9}
357
+ $$
358
+
359
+ Noting that the final term in this equation contains Eq. A.2, we can simplify further:
360
+
361
+ $$
362
+ \frac {d}{d T} c _ {n} (T) = \left(\frac {1}{T - t _ {0}}\right) \left[ f (T) \overline {{\phi (1)}} - f (t _ {0}) \overline {{\phi (0)}} \frac {\delta (t _ {0})}{\delta T} \right] + \tag {A.10}
363
+ $$
364
+
365
+ $$
366
+ \left(\frac {1}{T - t _ {0}}\right) c (T) \left[ \frac {- \delta (t _ {0})}{\delta T} \overline {{P}} - \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \overline {{P _ {t}}} - \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \right]
367
+ $$
368
+
369
+ Unfortunately, we still have a term $f(t_0)$ that we don't have access to; this is the value of the function at the start of our window. But we have not stored this value; that would defeat the point of an online update in the first place. Instead, we will approximate it based on our current coefficient vector and our known basis.
370
+
371
+ $$
372
+ c = \langle \phi , f \rangle
373
+ $$
374
+
375
+ $$
376
+ f = \langle \widetilde {\phi}, c \rangle
377
+ $$
378
+
379
+ $$
380
+ f (t _ {0}) = \langle \tilde {\phi} (0), c (T) \rangle
381
+ $$
382
+
383
+ We now have an update rule for $c$ that depends only on the frame $\phi$ , the current value of $c(T)$ , and the new information from the signal, $f(T)$ :
384
+
385
+ $$
386
+ \begin{array}{l} \frac {d}{d T} c (T) = \left(\frac {1}{T - t _ {0}}\right) \left[ f (T) \overline {{\phi (1)}} - \widetilde {\phi} (0) c (T) \overline {{\phi (0)}} \frac {\delta (t _ {0})}{\delta T} \right] \tag {A.11} \\ \left. - \left(\frac {1}{T - t _ {0}}\right) \left[ c (T) \left[ \frac {\delta (t _ {0})}{\delta T} \bar {P} + \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \bar {P _ {t}} + \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \right] \right] \right. \\ \end{array}
387
+ $$
388
+
389
+ # A.1.1 The scaled case
390
+
391
+ In the case of scaling, $t_0 = 0$ and $\frac{\delta}{\delta T} (t_0) = 0$
392
+
393
+ $$
394
+ \begin{array}{l} \frac {d}{d T} c _ {n} (T) = \left(\frac {1}{T}\right) \left[ f (T) \overline {{\phi (1)}} - \widetilde {\phi} (0) c (T) \overline {{\phi (0)}} \frac {\delta \left(t _ {0}\right)}{\delta T} \right] ^ {0} (A.12) \\ - \left(\frac {1}{T}\right) c (T) \left[ \frac {\delta \left(t _ {0}\right)}{\delta T} \overrightarrow {P} + \left(1 - \frac {\delta \left(t _ {0}\right)}{\delta T}\right) \overrightarrow {P _ {t}} + \left(1 - \frac {\delta \left(t _ {0}\right)}{\delta T}\right) ^ {0} \right] (A.13) \\ \end{array}
395
+ $$
396
+
397
+ $$
398
+ \frac {d}{d T} c _ {n} (T) = \left(\frac {1}{T}\right) f (T) \overline {{\phi (1)}} - \left(\frac {1}{T}\right) c (T) \left(\bar {P _ {t}} + 1\right) \tag {A.14}
399
+ $$
400
+
401
+ The A matrix acts on the coefficient vector $c$ , and B acts on the current input, $f(T)$ . Expressed in matrix notation:
402
+
403
+ $$
404
+ \frac {d}{d T} c _ {n} (T) = - \frac {1}{T} \underbrace {\left(\bar {P _ {t}} + I\right)} _ {A} c (T) + \frac {1}{T} \underbrace {\phi (1)} _ {B} f (T) \tag {A.15}
405
+ $$
406
+
407
+ Equivalently,
408
+
409
+ $$
410
+ \frac {d}{d T} c _ {n} (T) = - \frac {1}{T} \underbrace {\left(\left\langle \widetilde {\phi} \left(\frac {t}{T}\right) , t \phi \left(\frac {t}{T}\right) ^ {\prime} \right\rangle + I\right)} _ {A} c (T) + \frac {1}{T} \underbrace {\overline {{\phi (1)}}} _ {B} f (T) \tag {A.16}
411
+ $$
412
+
413
+ # A.1.2 The translated case
414
+
415
+ Now $T - t_0 = \theta$ where $\theta$ is the window size, and $\frac{\delta}{\delta T}(t_0) = 1$ . Following the same procedure as the previous section:
416
+
417
+ $$
418
+ \frac {d}{d T} c _ {n} (T) = \left(\frac {1}{\theta}\right) f (T) \overline {{\phi (1)}} - \left(\frac {1}{\theta}\right) c (T) \left[ \widetilde {\phi} (0) \overline {{\phi (0)}} + \bar {P} \right] \tag {A.17}
419
+ $$
420
+
421
+ $$
422
+ \frac {d}{d T} c _ {n} (T) = - \frac {1}{\theta} \underbrace {\left(\bar {P} + \tilde {\phi} (0) \overline {{\phi (0)}}\right)} _ {A} c (T) + \frac {1}{\theta} \underbrace {\overline {{\phi (1)}}} _ {B} f (T) \tag {A.18}
423
+ $$
424
+
425
+ $$
426
+ \frac {d}{d T} c _ {n} (T) = - \frac {1}{\theta} \underbrace {\left(\left\langle \widetilde {\phi} \left(\frac {t}{\theta}\right) , \phi^ {\prime} \left(\frac {t}{\theta}\right) \right\rangle + \widetilde {\phi} (0) \overline {{\phi (0)}}\right)} _ {A} c (T) + \frac {1}{\theta} \underbrace {\overline {{\phi (1)}}} _ {B} f (T) \tag {A.19}
427
+ $$
428
+
429
+ # A.2 Experiments
430
+
431
+ # A.2.1 Datasets
432
+
433
+ In this paper, we conducted our experiments on these datasets:
434
+
435
+ M4 forecasting competition: The M4 forecasting competition dataset [36] consists of 100,000 univariate time series from six domains: demographic, finance, industry, macro, micro, and other. The data covers various frequencies (hourly, daily, weekly, monthly, quarterly, yearly) and originates from sources like censuses, financial markets, industrial reports, and economic surveys. It is designed to benchmark forecasting models across diverse real-world applications, accommodating different horizons and data lengths. We test on 3,000 random instances.
436
+
437
+ Speech commands: The speech commands dataset [37] is a set of 400 audio files, each containing a single spoken English word or background noise with about one second duration. These words are from a small set of commands, and are spoken by a variety of different speakers. This data set is designed to help train simple machine learning models.
438
+
439
+ Wavelet benchmark collection: Donoho [38] introduced a collection of popular wavelet benchmark signals, each designed to capture different types of singularities. This benchmark includes well-known signals such as Bumps, Blocks, Spikes, and Piecewise Polynomial. Following this model, we synthesize random signals belonging to the classes of bumps, blocks, spikes, and piecewise polynomials. Details and examples of these signals can be found in Appendix A.2.2.
440
+
441
+ # A.2.2 Wavelet Benchmark Collection
442
+
443
+ Donoho [38] introduced a collection of popular wavelet benchmark signals, each designed to capture different types of singularities. This benchmark includes well-known signals such as Bumps, Blocks, Spikes, and Piecewise Polynomial.
444
+
445
+ Following this model, we synthesize random signals belonging to the classes of bumps, blocks, spikes, and piecewise polynomials in our experiments to compare the fidelity of WaveS to LegS and FouS, and also to compare the fidelity of WaveT to LegT and FouT.
446
+
447
+ Figure 9 demonstrates a random instance from each of the classes of the signals that we have in our wavelet benchmark collection.
448
+
449
+ ![](images/a9b4183ba0947f1aaef108fb3ee0c48c35508d4e33f185a58a814717652ff63d.jpg)
450
+
451
+ ![](images/5a0b0f60fe9142f79f883207e6bb091a17720bb6abc9f484e6064f1c4af171c5.jpg)
452
+
453
+ ![](images/1656b4f3e4984f7c5e66ef6f32117f27d79f51480d1510c8b51a8b3978b8793c.jpg)
454
+ Figure 9: Instances of different signal types in the wavelet benchmark collection. Top Left: Blocks is a piecewise constant signal with random-height sharp jumps placed randomly. Top Right: Bumps is a collection of random pulses where each pulse contains a cusp. Bottom Left: Piecepoly is a piecewise polynomial signal with discontinuities at the transitions between polynomial segments. Bottom Right: Spikes is a collection of rectangular pulses placed randomly with random positive height.
455
+
456
+ ![](images/b22dba8efaecdcd3796145e4690b55870bc3418addf7c8c89c2514c35180dda5.jpg)
457
+
458
+ # A.2.3 Description of metrics for 'Spikes' and 'Bumps' experiments
459
+
460
+ - Peaks Missed The number of true peaks in the signal is $N_{tp}$ , and the number of detected peaks (that is, where the estimated signal surpasses an amplitude threshold $Th_{amp}$ ), is $N_{dp}$ . $N_{dp|tp}$ is the number of detected peaks where a true peak is also within a displacement threshold $(Th_{dis})$ of the detected peak.
461
+
462
+ $$
463
+ \text{Peaks Missed} = \left(1 - \frac {N _ {d p \mid t p}}{N _ {t p}}\right) \times 100 \%
464
+ $$
465
+
466
+ - False Peaks The metric False Peaks is calculated as the percentage of detected peaks that occurred when there was not a true peak within the displacement threshold. The number of detected peaks when there was no true peak is represented by $N_{dp|\overline{tp}}$ .
467
+
468
+ $$
469
+ \text{False Peaks} = \frac{N_{dp \mid \overline{tp}}}{N_{dp}} \times 100\%
470
+ $$
471
+
472
+ - Instance-wise Wins In each of the $K$ time-series instances, an SSM $m$ wins the instance over the other SSM models if it misses no more true peaks than they do.
473
+
474
+ $$
475
+ \text{Instance-wise Wins} = \frac {1}{K} \sum_ {k = 1} ^ {K} w _ {k} \times 100 \%
476
+ $$
477
+
478
+ $$
479
+ w_{k} = \begin{cases} 1, & \text{if } \text{Peaks Missed}_{m} \leq \text{Peaks Missed}_{\text{others}}, \\ 0, & \text{otherwise}. \end{cases}
480
+ $$
481
+
482
+ In cases where multiple models tie for the best result, each tied model receives the credit for that time-series instance. As a result, the sum of instance-wise wins for different SSMs may exceed 100%.
483
+
484
+ - Relative Amplitude Error The relative amplitude error is calculated as the average percent error in the estimated amplitude of detected peaks, including false peaks.
485
+
486
+ $$
487
+ \text{Relative Amplitude Error} = \frac{1}{N_{dp}} \left( \sum_{n = 1}^{N_{dp \mid tp}} \frac{\left| A_{tp, n} - A_{dp \mid tp, n} \right|}{A_{tp, n}} \right) \times 100\%
488
+ $$
489
+
490
+ - Average Displacement The location of a detected peak where a true peak was within a displacement threshold is given by $X_{dp|tp}$ . The location of the true peak is denoted as $X_{tp}$ .
491
+
492
+ $$
493
+ \text{Average Displacement} = \frac{1}{N_{dp}} \sum_{n = 1}^{N_{dp}} \left| X_{tp, n} - X_{dp \mid tp, n} \right|
494
+ $$
495
+
496
+ # A.2.4 Wavelet frames used for each experiment
497
+
498
+ Unlike HiPPO-based SSMs, which are fully characterized by their state size $N$ , WaLRUS employs redundant wavelet frames that require additional parameters for identification. Once the wavelet frame is defined, the SaFARi framework constructs the unique $A$ , $B$ matrices corresponding to that frame. The key parameters for specifying a redundant wavelet frame in WaLRUS are as follows:
499
+
500
+ - Wavelet Function: Wavelet frames are built from a mother wavelet and a father wavelet, which capture high-frequency details and low-frequency approximations, respectively. Different families such as Daubechies, Morlet, Symlet, and Coiflet provide varied wavelet functions. For this work, we use the D22 wavelet from the Daubechies family.
501
+ - L (Frame Length): This represents the length of the wavelet frame. Increasing $L$ increases numerical accuracy in the calculation of the $A$ and $B$ matrices at the cost of additional computation time. However, this initial computation need only be performed once, so it is best to choose a large $L$ . For the experiments in this work, we set $L = 2^{19}$ .
502
+ - Scale min and $N_{\mathrm{eff}}$ : The minimum scale sets the smallest feature of the signal that can be represented by the frame. This parameter should be chosen based on knowledge about the signal of interest and its component frequencies. Note that the size of the smallest feature is relative to the length of the signal under consideration, so this value may differ under scaling and translating measures.
503
+
504
+ For wavelets, scale min also controls the effective rank, $N_{\mathrm{eff}}$ . Each new lower scale introduces a factor of two in the effective rank of the frame, owing to the additional shifted elements in each scale. Fig. 3 shows two scales, where there are 3 father wavelets $(\phi_0)$ and 3 coarse-scale mother wavelets $(\psi_1)$ . The next scale introduces 6 scaled and shifted mother wavelets $(\psi_2)$ , the next would include 12, and so on. Table 3 also illustrates this pattern, with scale min of 0 corresponding to $N_{\mathrm{eff}}$ of $2^{6}$ , scale min of -1 corresponding to $N_{\mathrm{eff}}$ of $2^{7}$ , and so on, with some margin of error for numerical accuracy and truncation.
505
+
506
+ Our code includes another variable, scale max. Since smaller scales can also combine to represent larger scales, scale max in fact has no impact on $N_{\mathrm{eff}}$ (see [29] for further information). Fig. 10 demonstrates on an example implementation that varying scale max does not impact the size of $N_{\mathrm{eff}}$ . It is also easily shown that varying scale max results in the same diagonalized A; see our code supplement. Adding coarser scales can help improve numerical accuracy in the calculation of A, however. We do not include scale max in Table 3, but we do provide it in our code with each experiment for reproducibility.
507
+
508
+ ![](images/8889b545668b793f9fb4f36491f8a9c702f8a81b4402f005bff09551866e1997.jpg)
509
+ Figure 10: Effective Rank of WaLRUS $A$ matrix with Scale Min=-3, shift=0.01
510
+
511
+ - Shift: At scale $i$ , $2^{-i}m$ overlapping shifts are applied to the wavelets, where $0 < m \leq 1$ is a shift constant. Setting $m = 1$ corresponds to dyadic shifts. As our wavelet frames typically
512
+
513
+ <table><tr><td>Experiment</td><td>Basis/Measure</td><td>scale min</td><td>shift</td><td>\(N_{\text{eff}}\)</td></tr><tr><td rowspan="3">Scaled M4</td><td>WaveS</td><td>-3</td><td>0.01</td><td>501</td></tr><tr><td>LegS</td><td>-</td><td>-</td><td>500</td></tr><tr><td>FouS</td><td>-</td><td>-</td><td>500</td></tr><tr><td rowspan="3">Scaled Speech</td><td>WaveS</td><td>-5</td><td>0.01</td><td>1995</td></tr><tr><td>LegS</td><td>-</td><td>-</td><td>1995</td></tr><tr><td>FouS</td><td>-</td><td>-</td><td>1995</td></tr><tr><td rowspan="3">Scaled synthetic</td><td>WaveS</td><td>-3</td><td>0.01</td><td>501</td></tr><tr><td>LegS</td><td>-</td><td>-</td><td>500</td></tr><tr><td>FouS</td><td>-</td><td>-</td><td>500</td></tr><tr><td rowspan="3">Scaled peak detection</td><td>WaveS</td><td>0</td><td>0.01</td><td>65</td></tr><tr><td>LegS</td><td>-</td><td>-</td><td>65</td></tr><tr><td>FouS</td><td>-</td><td>-</td><td>65</td></tr><tr><td rowspan="3">Translated M4</td><td>WaveT</td><td>-1</td><td>0.01</td><td>128</td></tr><tr><td>LegT</td><td>-</td><td>-</td><td>128</td></tr><tr><td>FouT</td><td>-</td><td>-</td><td>128</td></tr><tr><td rowspan="3">Translated Speech</td><td>WaveT</td><td>-3</td><td>0.0025</td><td>500</td></tr><tr><td>LegT</td><td>-</td><td>-</td><td>500</td></tr><tr><td>FouT</td><td>-</td><td>-</td><td>500</td></tr><tr><td rowspan="3">Translated synthetic</td><td>WaveT</td><td>-1</td><td>0.01</td><td>128</td></tr><tr><td>LegT</td><td>-</td><td>-</td><td>128</td></tr><tr><td>FouT</td><td>-</td><td>-</td><td>128</td></tr><tr><td rowspan="3">Translated peak detection</td><td>WaveT</td><td>0</td><td>0.01</td><td>65</td></tr><tr><td>LegT</td><td>-</td><td>-</td><td>65</td></tr><tr><td>FouT</td><td>-</td><td>-</td><td>65</td></tr></table>
514
+
515
+ Table 3: Parameters for the redundant wavelet frame used by WaLRUS in different experiments. All of the above experiments share the parameters $L = 2^{19}$ and rcond = 0.01.
516
+
517
+ only contain a few dilation levels, using $m = 1$ can mean that the constructed set of vectors no longer satisfies the frame condition, and is lossy. We choose a small value (0.01 for most experiments), and tune this as needed.
518
+
519
+ - rcond: This parameter controls the numerical stability of the pseudo-inverse calculation for the dual frame. Singular values smaller than $\operatorname{rcond} \times \sigma_{\max}$ are discarded during the inversion process to maintain numerical stability.
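+
+ For reference, this truncation rule matches the rcond convention of a standard SVD-based pseudo-inverse. The following sketch (illustrative, not the project's actual dual-frame routine; the frame matrix here is random and deliberately rank-deficient) makes the cutoff explicit and checks it against numpy.linalg.pinv:
+
+ ```python
+ import numpy as np
+
+ # Rows of F play the role of a redundant frame; the canonical dual frame is
+ # obtained from a pseudo-inverse in which singular values below
+ # rcond * sigma_max are discarded.
+ rng = np.random.default_rng(0)
+ F = rng.standard_normal((128, 50)) @ rng.standard_normal((50, 64))  # rank ~50
+
+ rcond = 0.01
+ U, s, Vt = np.linalg.svd(F, full_matrices=False)
+ keep = s > rcond * s.max()                 # drop small singular values
+ F_pinv = (Vt[keep].T / s[keep]) @ U[:, keep].T
+
+ # numpy's built-in pseudo-inverse applies the same cutoff rule:
+ assert np.allclose(F_pinv, np.linalg.pinv(F, rcond=rcond))
+
+ print("effective rank after truncation:", keep.sum())   # analogous to N_eff
+ ```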
520
+
521
+ Note that all the above parameters serve solely to specify the redundant wavelet frame; WaLRUS itself does not introduce any new parameters. Table 3 summarizes the settings for all experiments, alongside the SSM sizes for HiPPO-Legendre and HiPPO-Fourier.
522
+
523
+ # A.2.5 Computational resources
524
+
525
+ Within the scope of this paper, no networks were trained and no parameters were learned. Only CPU resources were utilized, but speed could be improved with parallel resources on a GPU. Using WaLRUS to find representations involves two stages:
526
+
527
+ - Pre-computing: Computing SSM $A$ matrices and diagonalizing them. This step can be computationally intensive, but need only be calculated once.
528
+ - Computation: Using SSM $A$ matrices to find representations of signals.
529
+
530
+ For all our experiments except Scaled-Speech, the pre-computing stage takes less than 10 minutes. For Scaled-Speech, the pre-compute time is on the order of hours. Once the $A$ matrices are computed and stored, run time is the same for all experiments.
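+
+ As a rough illustration of the two stages (a hedged sketch with placeholder matrices; the real $A$ and $B$ come from the wavelet-frame construction above, and the actual discretization used in our code may differ):
+
+ ```python
+ import numpy as np
+
+ # --- Pre-computing (done once, potentially slow) -------------------------
+ N = 64
+ rng = np.random.default_rng(1)
+ A = -np.eye(N) + 0.1 * rng.standard_normal((N, N))   # placeholder SSM matrix
+ B = rng.standard_normal((N, 1))                       # placeholder input map
+ eigvals, V = np.linalg.eig(A)                         # diagonalize once
+ V_inv = np.linalg.inv(V)                              # store eigvals, V, V_inv for reuse
+
+ # --- Computation (cheap, repeated per signal) ----------------------------
+ dt = 1e-3
+ signal = np.sin(2 * np.pi * 5 * dt * np.arange(2000))
+ c = np.zeros(N, dtype=complex)                        # state in the eigenbasis
+ B_diag = V_inv @ B[:, 0]
+ for f_k in signal:                                    # simple forward-Euler step
+     c = c + dt * (eigvals * c + B_diag * f_k)
+ coeffs = (V @ c).real                                 # online representation
+ print(coeffs[:5])
+ ```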
531
+
532
+ # NeurIPS Paper Checklist
533
+
534
+ # 1. Claims
535
+
536
+ Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
537
+
538
+ Answer: [Yes]
539
+
540
+ Justification: Our abstract and introduction state that we introduce the use of wavelets in state-space models for online function representation, and show how these can outperform state-of-the-art polynomial models for certain data types. Section 3 describes the construction of wavelet-based SSMs, and Section 4 experimentally supports our performance claims.
541
+
542
+ Guidelines:
543
+
544
+ - The answer NA means that the abstract and introduction do not include the claims made in the paper.
545
+ - The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
546
+ - The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
547
+ - It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
548
+
549
+ # 2. Limitations
550
+
551
+ Question: Does the paper discuss the limitations of the work performed by the authors?
552
+
553
+ Answer: [Yes]
554
+
555
+ Justification: Section 5 describes limitations, both in terms of what we have implemented in this work, as well as limitations in the use of our method.
556
+
557
+ Guidelines:
558
+
559
+ - The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
560
+ - The authors are encouraged to create a separate "Limitations" section in their paper.
561
+ - The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
562
+ - The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
563
+ - The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
564
+ - The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
565
+ - If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
566
+ - While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
567
+
568
+ # 3. Theory assumptions and proofs
569
+
570
+ Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
571
+
572
+ Answer: [Yes]
573
+
574
+ Justification: All necessary theoretical background is given in Sec. 2, and full support for our results is given in Sections 3-4.
575
+
576
+ Guidelines:
577
+
578
+ - The answer NA means that the paper does not include theoretical results.
579
+ - All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
580
+ - All assumptions should be clearly stated or referenced in the statement of any theorems.
581
+ - The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
582
+ - Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
583
+ - Theorems and Lemmas that the proof relies upon should be properly referenced.
584
+
585
+ # 4. Experimental result reproducibility
586
+
587
+ Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
588
+
589
+ Answer: [Yes]
590
+
591
+ Justification: The Experiments section thoroughly describes what metrics were tested and how they were evaluated, as well as the publicly available datasets used. Scripts to replicate the experimental results are available at https://github.com/echbaba/walrus
592
+
593
+ Guidelines:
594
+
595
+ - The answer NA means that the paper does not include experiments.
596
+
597
+ - If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
598
+
599
+ - If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
600
+
601
+ - Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
602
+
603
+ - While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
604
+
605
+ (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
606
+ (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
607
+ (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
608
+ (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in
609
+
610
+ some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
611
+
612
+ # 5. Open access to data and code
613
+
614
+ Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
615
+
616
+ Answer: [Yes]
617
+
618
+ Justification: Code and data are available at https://osf.io/7kjcx/?view_only=5dc38b9776624deb9d1c0d8f88108658
619
+
620
+ Guidelines:
621
+
622
+ - The answer NA means that paper does not include experiments requiring code.
623
+ - Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
624
+ - While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
625
+ - The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
626
+ - The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
627
+ - The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
628
+ - At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
629
+ - Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
630
+
631
+ # 6. Experimental setting/details
632
+
633
+ Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
634
+
635
+ Answer: [Yes]
636
+
637
+ Justification: All the required information on both the datasets, and the exact experimental setting required to recreate the wavelet frame, are provided in the Appendix. This information can also be found in our code.
638
+
639
+ Guidelines:
640
+
641
+ - The answer NA means that the paper does not include experiments.
642
+ - The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
643
+ - The full details can be provided either with the code, in appendix, or as supplemental material.
644
+
645
+ # 7. Experiment statistical significance
646
+
647
+ Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
648
+
649
+ Answer: [Yes]
650
+
651
+ Justification: Error bars and quantiles are provided in Figures 5 and 7, and explanations of their source are in the text and captions of the figures. Since MSE is not normally distributed, we chose to use quantiles and percentiles to reflect the distribution more accurately. We also provide Tables 1 and 2 to describe additional nuances of the comparison data.
652
+
653
+ Guidelines:
654
+
655
+ - The answer NA means that the paper does not include experiments.
656
+ - The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
657
+ - The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
658
+ - The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
659
+ - The assumptions made should be given (e.g., Normally distributed errors).
660
+ - It should be clear whether the error bar is the standard deviation or the standard error of the mean.
661
+ - It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
662
+ - For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
663
+ - If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
664
+
665
+ # 8. Experiments compute resources
666
+
667
+ Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
668
+
669
+ Answer: [Yes]
670
+
671
+ Justification: Since our work did not involve any training, no GPU computation was necessary. More discussion is available in the Appendix (Sec. A.2.5).
672
+
673
+ Guidelines:
674
+
675
+ - The answer NA means that the paper does not include experiments.
676
+ - The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
677
+ - The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
678
+ - The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
679
+
680
+ # 9. Code of ethics
681
+
682
+ Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
683
+
684
+ Answer: [Yes]
685
+
686
+ Justification: We have conducted this research with integrity and reported our findings with honesty. The link to the Code of Ethics provided is broken, and so we have instead consulted this provisional copy of the document: https://openreview.net/forum?id=zVoy8kAFKPr.
687
+
688
+ Guidelines:
689
+
690
+ - The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
691
+ - If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
692
+ - The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
693
+
694
+ # 10. Broader impacts
695
+
696
+ Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
697
+
698
+ Answer: [Yes]
699
+
700
+ Justification: This work is a basic mathematical result that does not have a targeted end use. We do note in our conclusion that improved function approximators, like the one we present here, can reduce the computational resources required for training certain types of neural networks – resources that have recently become a major environmental concern.
701
+
702
+ Guidelines:
703
+
704
+ - The answer NA means that there is no societal impact of the work performed.
705
+ - If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
706
+ - Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
707
+ - The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
708
+ - The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
709
+ - If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
710
+
711
+ # 11. Safeguards
712
+
713
+ Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
714
+
715
+ Answer: [NA]
716
+
717
+ Justification: This is a foundational and theoretical work that is primarily mathematical in nature: a compressive online approximation of time-series signals over a wavelet frame. The potential use cases for such a tool are similar in scope to that of a Fourier Transform; that is, it is too broad to responsibly hypothesize specific use cases or create guidelines.
718
+
719
+ Guidelines:
720
+
721
+ - The answer NA means that the paper poses no such risks.
722
+ - Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
723
+ - Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
724
+ - We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
725
+
726
+ # 12. Licenses for existing assets
727
+
728
+ Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
729
+
730
+ Answer: [Yes]
731
+
732
+ Justification: The M4 dataset does not have a required license: https://paperswithcode.com/dataset/m4. The SpeechCommands dataset has a CC BY license, allowing for unrestricted use with attribution to the author: https://huggingface.co/datasets/google/speech_commands. The four other data types we test on are generated by code that is made available with this paper, and are based on [38].
733
+
734
+ Guidelines:
735
+
736
+ - The answer NA means that the paper does not use existing assets.
737
+ - The authors should cite the original paper that produced the code package or dataset.
738
+ - The authors should state which version of the asset is used and, if possible, include a URL.
739
+ - The name of the license (e.g., CC-BY 4.0) should be included for each asset.
740
+ - For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
741
+ - If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
742
+ - For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
743
+ - If this information is not available online, the authors are encouraged to reach out to the asset's creators.
744
+
745
+ # 13. New assets
746
+
747
+ Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
748
+
749
+ Answer: [Yes]
750
+
751
+ Justification: An implementation of WaLRUS is provided with the code.
752
+
753
+ Guidelines:
754
+
755
+ - The answer NA means that the paper does not release new assets.
756
+ - Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
757
+ - The paper should discuss whether and how consent was obtained from people whose asset is used.
758
+ - At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
759
+
760
+ # 14. Crowdsourcing and research with human subjects
761
+
762
+ Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
763
+
764
+ Answer: [NA]
765
+
766
+ Justification: There were no human subjects in this theoretical work.
767
+
768
+ Guidelines:
769
+
770
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
771
+ - Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
772
+ - According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
773
+
774
+ # 15. Institutional review board (IRB) approvals or equivalent for research with human subjects
775
+
776
+ Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
777
+
778
+ Answer: [NA]
779
+
780
+ Justification: There were no study participants in this theoretical work.
781
+
782
+ Guidelines:
783
+
784
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
785
+ - Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
786
+ - We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
787
+ - For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
788
+
789
+ # 16. Declaration of LLM usage
790
+
791
+ Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
792
+
793
+ Answer: [NA]
794
+
795
+ Justification: We have used LLMs only to assist in writing and polishing the grammar.
796
+
797
+ Guidelines:
798
+
799
+ - The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
800
+ - Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
NeurIPS/2025/WaLRUS_ Wavelets for Long range Representation Using State Space Methods/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:212eae9c82ceeb31a7458b362a5a781db77115accdc696ae2c89e268693ae141
3
+ size 765746
NeurIPS/2025/WaLRUS_ Wavelets for Long range Representation Using State Space Methods/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:665fc17fc795f3ab4e4213905b76d5f2ea19825e3fab85c4a2c0428db5c659df
3
+ size 797415
NeurIPS/2025/Walking the Schrödinger Bridge_ A Direct Trajectory for Text-to-3D Generation/0f83e49a-64ce-4487-99d8-879796e12187_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:74bdce8c880e3d78f3c1c38a42a6ec2fa8c1c0fabbc8c68c8ca667fa63396776
3
+ size 126408
NeurIPS/2025/Walking the Schrödinger Bridge_ A Direct Trajectory for Text-to-3D Generation/0f83e49a-64ce-4487-99d8-879796e12187_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2ec2adbebdfbef430f94ec46f07ed4115ed6204da7d5ba0b9d71feb23d1030d5
3
+ size 172035
NeurIPS/2025/Walking the Schrödinger Bridge_ A Direct Trajectory for Text-to-3D Generation/0f83e49a-64ce-4487-99d8-879796e12187_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:37fe7bdfc6c88c88c9ce52a6621d10caa04d95f528e3ba9fa139dfe1b8290323
3
+ size 2884064
NeurIPS/2025/Walking the Schrödinger Bridge_ A Direct Trajectory for Text-to-3D Generation/full.md ADDED
@@ -0,0 +1,627 @@
1
+ # Walking the Schrödinger Bridge: A Direct Trajectory for Text-to-3D Generation
2
+
3
+ Ziying Li
4
+
5
+ Zhejiang University emmalee@zju.edu.cn
6
+
7
+ Xuequan Lu
8
+
9
+ University of Western Australia bruce.lu@uwa.edu.au
10
+
11
+ Xinkui Zhao*
12
+
13
+ Zhejiang University zhaoxinkui@zju.edu.cn
14
+
15
+ Guanjie Cheng
16
+
17
+ Zhejiang University chengguanjie@zju.edu.cn
18
+
19
+ Shuiguang Deng
20
+
21
+ Zhejiang University dengsg@zju.edu.cn
22
+
23
+ Jianwei Yin
24
+
25
+ Zhejiang University
26
+ zjuyjw@cs.zju.edu.cn
27
+
28
+ https://github.com/emmalee789/TraCe.git
29
+
30
+ ![](images/f90da411cc61f71f36a3baf43cacbfb7e2375d7f121233779665feea464c9367.jpg)
31
+ Figure 1: From left to right: (a) Standard VSD [45] ( $\text{CFG} = 7.5$ , CFG: Classifier-free Guidance); (b) Standard SDS [35]; (c) VSD [45] ( $\text{CFG} = 20$ ); (d) SDS [35] ( $\text{CFG} = 20$ ); (e) Ours ( $\text{CFG} = 20$ ). VSD with $\text{CFG} = 7.5$ and $\text{CFG} = 20$ both yield low-quality results. Standard SDS yields artifacts (e.g., over-smoothing) with high CFG, and SDS with low CFG yields low-quality results. Our method generates high-quality and high-fidelity results with a fair CFG value.
32
+
33
+ ![](images/fa6a63a42a91916a9104fb123c3499be2485a9b535fbdf0078d73449fe6e06f3.jpg)
34
+
35
+ ![](images/7f40c1e7f28860cf2587540947583008f958586ec3496f82d3cf43b4e2baa8d4.jpg)
36
+
37
+ ![](images/3418e68378272bfd1a38e52a706dd0667164eb0993ffdb37c8ea0057b94c6c60.jpg)
38
+
39
+ ![](images/3284fe19a89c8d1e17030956f4cd0fdb4b5ca09e433bbb6e2a91ba90db1fd566.jpg)
40
+
41
+ # Abstract
42
+
43
+ Recent advancements in optimization-based text-to-3D generation heavily rely on distilling knowledge from pre-trained text-to-image diffusion models using techniques like Score Distillation Sampling (SDS), which often introduce artifacts such as over-saturation and over-smoothing into the generated 3D assets. In this paper, we address this essential problem by formulating the generation process as learning an optimal, direct transport trajectory between the distribution of the current rendering and the desired target distribution, thereby enabling high-quality generation with smaller Classifier-free Guidance (CFG) values. First, we theoretically establish SDS as a simplified instance of the Schrödinger Bridge framework. We prove that SDS employs the reverse process of a Schrödinger
44
+
45
+ Bridge, which, under specific conditions (e.g., Gaussian noise at one end), collapses to the score function of the pre-trained diffusion model used by SDS. Based upon this, we introduce Trajectory-Centric Distillation (TraCe), a novel text-to-3D generation framework, which reformulates the mathematically tractable Schrödinger Bridge framework to explicitly construct a diffusion bridge from the current rendering to its text-conditioned, denoised target, and trains a LoRA-adapted model on this trajectory's score dynamics for robust 3D optimization. Comprehensive experiments demonstrate that TraCe consistently achieves superior quality and fidelity compared to state-of-the-art techniques.
46
+
47
+ # 1 Introduction
48
+
49
+ Generating three-dimensional content directly from textual descriptions has recently attracted intensive attention in the research community. Recent methods leveraging explicit 3D representations like Gaussian Splatting have significantly accelerated the generation process [25, 3]. Despite these advancements, a key bottleneck remains: the quality and fidelity of generated 3D assets often lag behind their 2D counterparts. This limitation is frequently attributed to the scarcity of large-scale, high-quality 3D datasets required for direct supervised training [27, 28, 10].
50
+
51
+ To bridge this gap, many state-of-the-art text-to-3D methods employ optimization strategies guided by powerful, pre-trained 2D text-to-image (T2I) diffusion models [36]. Score Distillation Sampling (SDS) [35] has become the cornerstone paradigm. SDS leverages powerful pre-trained 2D text-to-image diffusion models to guide the optimization of 3D representations. Nevertheless, the standard SDS approach typically requires high values for Classifier-Free Guidance (CFG) [13] to achieve strong text alignment [35, 47, 4, 24, 49]. This reliance on high CFG values is often problematic, leading to visual artifacts such as over-saturation [37] and over-smoothing [23] in the generated 3D assets. Recognizing these issues, several variants of SDS have been proposed recently [45, 29, 17, 44, 48, 11, 6]. However, these SDS-based methods, including the recent variants, face persistent challenges. Firstly, as analyzed in recent studies [45, 1, 24], SDS and its variants fundamentally operate by matching the gradient direction predicted by the T2I model. While differing in their specific source and target choices for computing this gradient, they all rely on score estimates derived from the T2I backbone. These score estimates, however, can be noisy and are not guaranteed to represent an optimal direction for 3D optimization (shown in Figure 2b), potentially causing unexpected artifacts. Secondly, variants designed to operate effectively at lower CFG values (e.g., CFG=7.5), such as Score Distillation via Inversion (SDI) [29] or Variational Score Distillation (VSD) [45], have shown limited success when applied to optimizing certain popular 3D representations like 3D Gaussian Splatting (3DGS), often yielding less desirable results (shown in Figure 1).
52
+
53
+ The aforementioned analysis underscores the limitations of existing approaches and highlights the urgent need for a more robust optimization framework for text-to-3D generation, one that does not solely rely on potentially noisy score matching or operate under restrictive guidance conditions. In this paper, we first provide a theoretical insight by establishing that SDS can be understood as a simplified instance of the Schrödinger Bridge framework [39]. We demonstrate (Section 4.1) that SDS implicitly employs the reverse process of a Schrödinger Bridge, which, under specific conditions such as a Gaussian noise distribution at one endpoint, effectively collapses to utilizing the score function of the pre-trained diffusion model. This perspective not only clarifies the underlying dynamics of SDS but also illuminates pathways for more principled trajectory design. Based upon this reformulation, we introduce Trajectory-Centric Distillation (TraCe), a novel text-to-3D generation framework. TraCe leverages the mathematically tractable framework of Schrödinger Bridges [26] to explicitly construct and learn a diffusion bridge for text-to-3D generation. This bridge connects the current rendering $(X_{1})$ to its text-conditioned, denoised target $(X_0^{\mathrm{pred}})$, thereby defining a more stable and direct optimization trajectory (visualization in Figure 2a). TraCe then employs Low-Rank Adaptation (LoRA) [14] to fine-tune the T2I diffusion model specifically for navigating this constructed bridge, enabling it to precisely learn the score dynamics required for robust 3D optimization along this optimal trajectory towards the target distribution.
54
+
55
+ Our proposed TraCe framework, which operationalizes the direct transport path via Schrödinger Bridges, is rigorously evaluated. Extensive experiments demonstrate that this approach yields high-fidelity 3D assets with strong adherence to textual descriptions (Figure 4 and Table 1). The results consistently showcase TraCe's capacity to achieve superior visual quality and semantic coherence
56
+
57
+ in generated content (Figure 4 and Supplementary), highlighting the efficacy of our theoretically grounded direct trajectory optimization for text-to-3D generation.
58
+
59
+ In summary, our contributions are:
60
+
61
+ - We establish a novel theoretical connection, demonstrating that SDS can be precisely understood as a special case of the Schrödinger Bridge framework. This reformulation clarifies the underlying transport dynamics implicitly leveraged by SDS.
62
+
63
+ - We introduce Trajectory-Centric Distillation (TraCe), a new text-to-3D generation framework. TraCe explicitly learns an optimal transport path, guided by a tractable Schrödinger Bridge formulation, between the current 3D model's rendering and a dynamically estimated, text-aligned target view. This is achieved by constructing and sampling along this explicit diffusion bridge, enabling more direct and stable 3D optimization.
64
+
65
+ - Experiments demonstrate that our TraCe achieves high-quality 3D generation, surpassing current state-of-the-art techniques. TraCe exhibits enhanced robustness, particularly excelling in challenging low CFG values where the performance of existing methods typically degrades.
66
+
67
+ # 2 Related Work
68
+
69
+ Distilling 2D into 3D. Leveraging large-scale, pre-trained text-to-image (T2I) diffusion models [36] as priors has become a prominent technique for generation tasks in data-scarce domains, such as text-to-3D generation. SDS [35] is a seminal approach in this direction, enabling optimization of parametric representations (e.g., Neural Radiance Fields) by distilling knowledge from a 2D diffusion model. However, standard SDS is often susceptible to visual artifacts such as over-saturation [37] and over-smoothing [23]. To achieve plausible results, it frequently necessitates high Classifier-Free Guidance (CFG) weights [35, 47], which can further exacerbate these issues. Moreover, the SDS objective itself, while empirically effective, does not strictly correspond to the gradient of a well-defined probability distribution of the 3D parameters [45, 1, 24], potentially leading to suboptimal optimization paths [17, 44, 29, 48]. To address these limitations, several variants have been proposed. For instance, methods like Variational Score Distillation (VSD) [45] and Classifier Score Distillation (CSD) [48] explore alternative gradient formulations to better approximate the optimization process from the source distribution towards the target distribution. Other approaches like Score Distillation via Inversion (SDI) [29] try to better approximate the noise instead of using pure Gaussian noise. These variants can be understood through the lens of approximating an optimal transport path between the current image distribution (source) and the target natural image distribution, and from this perspective, a key difference between these methods lies in how they approximate the score of the source and target distributions [30]. For instance, SDS approximates it using the unconditional score, while VSD attempts a more direct approximation by fine-tuning a LoRA adapter on the current renderings. While these methods offer valuable contributions towards reducing the source distribution mismatch artifacts, they fundamentally rely on adapting gradients derived from pre-trained T2I models. This forces the optimization process to cope with score functions optimized for 2D image generation, which is inherently not optimal for tasks like 3D generation due to the domain gap. Our work differs greatly from these approaches. We establish a novel theoretical connection, demonstrating that SDS can be precisely understood as a specific instantiation of the Schrödinger Bridge framework. This reformulation clarifies the underlying transport dynamics implicitly leveraged by SDS. Building upon this insight, we introduce a method that explicitly constructs and learns a more direct and stable optimization trajectory by framing the process as a tractable Schrödinger Bridge between the current rendering and an estimated text-aligned target, thereby enhancing both the fidelity and robustness of text-to-3D generation.
70
+
71
+ Diffusion Models and Schrödinger Bridges. Diffusion models (DMs) [12], also known as Score-based Generative Models (SGMs) [40, 42], have emerged as a dominant class of deep generative techniques, achieving state-of-the-art performance in synthesizing high-fidelity data across various domains, notably images [40, 12, 42, 9]. These models typically define a forward diffusion process, often formulated as a stochastic differential equation (SDE), that gradually corrupts data samples into a simple prior distribution, usually Gaussian noise. A neural network is then trained, often via score-matching objectives [16, 43, 42], to approximate the score function (gradient of the log density) of the perturbed data distributions. This learned score function parameterizes a reverse-time
72
+
73
+ ![](images/db4c4f42fdea5a83bc323d9568f8252996b7b568e7e821e9ea56defd29b4fd9e.jpg)
74
+ (a)
75
+
76
+ ![](images/189c14bce0a22661635fd5014e654d595e7f63de381c027f1b09bbbf57519285.jpg)
77
+ (b)
78
+ Figure 2: Left: Schrödinger Bridge Visualization and Samples. Top: Probability flow of the bridge from current rendering $(x_{\mathrm{rnd}})$ to the predicted target $(x_0^{\mathrm{pred}})$ distribution. Bottom: Corresponding image samples, showing the current rendering, intermediate bridge samples $(x_{t}^{i})$ , and the final predicted target. Right: Gradient and Intermediate Rendering Comparison. The first row shows TraCe gradients, the second shows SDS gradients, and the third shows rendered images of the 3D models that have not finished generation. Note the reduced artifacts and potentially more coherent structure in the TraCe gradients and intermediate renderings.
79
+
80
+ SDE that transforms samples from the prior back into data samples. While being extremely successful, this standard paradigm typically relies on initiating the generative process from unstructured noise. The Schrödinger Bridge problem provides a more general theoretical framework, originating from statistical physics [38, 39] and connected to entropy-regularized optimal transport [21, 5] and stochastic control [7, 34]. It aims to find the most likely stochastic evolution between two specified arbitrary distributions, $P_A$ and $P_B$ , rather than being restricted to a noise prior. This offers the potential to learn direct transformations between complex data manifolds. Attempts have been made to apply Schrödinger Bridge concepts to text-to-3D generation. For instance, [30] proposes a naive approach to direct Schrödinger Bridge formulation between current renderings and target images guided by text prompts, though this requires an initial stage involving standard SDS. Another approach, DreamFlow [20], proposes to approximate the backward Schrödinger Bridge dynamics between current renderings and target images by simply repurposing a fine-tuned text-to-image model, a heuristic potentially deviating from the true underlying Schrödinger Bridge process. We critically advance text-to-3D generation by establishing the precise theoretical relationship between SDS and Schrödinger Bridges. This foundational insight is then exploited to develop a principled methodology for direct distributional transport, enabling the construction of trajectories towards text-aligned target distributions.
81
+
82
+ # 3 Preliminaries
83
+
84
+ Score-based Generative Model (SGM) and Schrödinger Bridge. Score-based Generative Models (SGM) [40, 42] learn to generate data by reversing a predefined forward diffusion process. This process gradually transforms data $X_0 \sim p_A$ into noise $X_1 \approx \mathcal{N}(0, I)$ and is often governed by a forward stochastic differential equation (SDE). Generation then proceeds by simulating the corresponding reverse-time SDE [2], starting from $X_1$ and integrating backward to $t = 0$ . The forward and reverse SDEs are given by:
85
+
86
+ $$
87
+ \begin{array}{l} d X_{t} = f_{t}(X_{t})\, dt + g_{t}\, dW_{t} \quad (\text{forward}) \\ d X_{t} = \left[ f_{t}(X_{t}) - g_{t}^{2} \nabla_{X_{t}} \log p(X_{t}, t) \right] dt + g_{t}\, d\bar{W}_{t} \quad (\text{backward}) \end{array}
88
+ $$
89
+
90
+ Here, $W_{t}$ (and $\bar{W}_t$ ) is a standard Wiener process, and $g_{t}$ represents the time-dependent diffusion coefficient. The central part of this reversal is the score function $\nabla_{X_t}\log p(X_t,t)$ , which is unknown and approximated using a time-conditioned neural network $s_{\psi}(X_t,t)$ (or an equivalent noise predictor $\epsilon_{\psi}(X_t,t)$ ). This network is trained using score-matching objectives [43, 42] on pairs $(X_0,X_t)$ sampled from the forward process. Sampling is performed by numerically integrating the reverse SDE using solvers like DDPM [12] or DDIM [41].
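+
+ For intuition, a toy Euler-Maruyama integration of the reverse SDE might look as follows. This is a sketch under the assumption of a VP-type schedule with $f_t(x) = -\tfrac{1}{2}\beta_t x$ and $g_t = \sqrt{\beta_t}$; the score network is a stand-in for $s_{\psi}$, not a trained model:
+
+ ```python
+ import torch
+
+ def beta(t):                               # placeholder linear beta schedule
+     return 0.1 + (20.0 - 0.1) * t
+
+ def score_net(x, t):                       # stands in for s_psi(x, t)
+     return -x                              # exact score of N(0, I), for illustration
+
+ x = torch.randn(16, 3)                     # start from the noise prior at t = 1
+ n_steps, dt = 1000, 1.0 / 1000
+ for i in range(n_steps):
+     t = 1.0 - i * dt
+     drift = -0.5 * beta(t) * x - beta(t) * score_net(x, t)     # f - g^2 * score
+     x = x + drift * (-dt) + (beta(t) * dt) ** 0.5 * torch.randn_like(x)
+ print(x.mean().item(), x.std().item())     # stays roughly N(0, I) for this toy score
+ ```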
91
+
92
+ The Schrödinger Bridge problem [39, 21] offers a generalization of SGMs to learn nonlinear diffusion processes between two arbitrary distributions, $X_0 \sim p_{\mathcal{A}}$ and $X_1 \sim p_{\mathcal{B}}$ . It seeks the most likely stochastic evolution connecting these boundary distributions, described by a pair of forward and backward SDEs:
93
+
94
+ $$
95
+ d X_{t} = \left[ f_{t}(X_{t}) + \beta_{t} \nabla_{X_{t}} \log \Psi(X_{t}, t) \right] dt + \sqrt{\beta_{t}}\, dW_{t} \quad (\text{forward})
96
+ $$
97
+
98
+ $$
99
+ d X_{t} = \left[ f_{t}(X_{t}) - \beta_{t} \nabla_{X_{t}} \log \hat{\Psi}(X_{t}, t) \right] dt + \sqrt{\beta_{t}}\, d\bar{W}_{t} \quad (\text{backward})
100
+ $$
101
+
102
+ where $\Psi(x,t)$ and $\hat{\Psi}(x,t)$ are non-negative functions known as Schrödinger factors, determined by coupled partial differential equations with boundary conditions $\Psi(x,0)\hat{\Psi}(x,0) = p_A(x)$ and $\Psi(x,1)\hat{\Psi}(x,1) = p_B(x)$ . The forward and backward processes induce the same marginal density $q(x,t)$ at any time $t \in [0,1]$ , satisfying Nelson's duality $\Psi(x,t)\hat{\Psi}(x,t) = q(x,t)$ [33]. Notably, SGM is a special case where $p_B \approx \mathcal{N}(0,I)$ and $\Psi(x,t) \approx 1$ , causing the forward drift modification to vanish and $\hat{\Psi}(x,t) \approx q(x,t)$ , recovering the score function in the reverse SDE.
103
+
104
+ Score Distillation Sampling (SDS). Score Distillation Sampling (SDS) [35] enables generating 3D assets by leveraging powerful pre-trained 2D text-to-image diffusion models [36], bypassing the need for large-scale 3D datasets. It optimizes the parameters $\theta$ of a differentiable 3D representation, such as NeRF [31], InstantNGP [32], or 3D Gaussian Splatting (3DGS) [18], using gradients derived from the diffusion model. In this work, we adopt 3DGS primarily for its rapid generation capabilities and high-fidelity visual output.
105
+
106
+ The core mechanism of SDS involves repeatedly rendering the 3D model from different views $c$ (with $x = g(\theta, c)$), adding noise to the rendering to obtain $x_t$, and using the 2D diffusion model's score estimate (denoising prediction $\epsilon_{\mathrm{pred}}$) to guide the optimization of $\theta$. Formally, the gradient is computed as
107
+
108
+ $$
109
+ \nabla_{\theta} \mathcal{L}_{\mathrm{SDS}}(\theta) = \mathbb{E}_{t, \epsilon, c} \left[ w(t) \left( \epsilon_{\mathrm{pred}} - \epsilon_{\mathrm{noise}} \right) \frac{\partial x_{\mathrm{rndr}}}{\partial \theta} \right] \tag{1}
110
+ $$
111
+
112
+ where $w(t)$ is a weighting factor and the term $(\epsilon_{\mathrm{pred}} - \epsilon_{\mathrm{noise}})$ provides the guidance signal. While SDS can be intuitively understood as moving renderings towards higher-density regions according to the 2D prior or formally interpreted via probability density distillation, the exact nature of its gradient signal is debated [17, 48, 44, 1, 45]. Practically, SDS often requires high classifier-free guidance (CFG) values, which can sometimes lead to artifacts like oversaturation or blur [17, 29, 46, 19, 37, 22]. Furthermore, the strategies that employ lower CFG values, for instance, methods explored in text-to-NeRF [45, 29], have demonstrated limitations when directly applied to the generation of 3D assets with Gaussian Splatting (Figure 4). Recent efforts such as LucidDreamer [23] have investigated text-to-3DGS under low CFG conditions; however, this direction currently faces trade-offs, including prolonged optimization durations (over 5000 iterations) and limitations in the attainable visual quality (Figure 4). Our work builds upon SDS by mitigating these issues through deriving a more direct and tractable optimization path, formulating Schrödinger Bridges to guide the generation process for achieving greater fidelity with lower CFG values.
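+
+ For concreteness, the gradient in Eq. (1) is typically applied as a stop-gradient update routed through the renderer only. The following hedged sketch uses toy stand-ins for the differentiable renderer and the frozen T2I U-Net (including the classifier-free guidance combination); it is illustrative, not the released implementation:
+
+ ```python
+ import torch
+
+ theta = torch.randn(3, 64, 64, requires_grad=True)          # toy "3D" parameters
+
+ def render(theta, camera):                                   # placeholder for x = g(theta, c)
+     return torch.sigmoid(theta)
+
+ def unet(x_t, t, text_emb):                                  # placeholder eps-predictor
+     return 0.1 * x_t if text_emb is not None else 0.05 * x_t
+
+ alphas_cumprod = torch.linspace(0.9999, 0.01, 1000)          # placeholder schedule
+ cfg = 20.0
+
+ x_rndr = render(theta, camera=None)
+ t = torch.randint(20, 980, (1,)).item()
+ a_bar = alphas_cumprod[t]
+ noise = torch.randn_like(x_rndr)
+ x_t = a_bar.sqrt() * x_rndr + (1 - a_bar).sqrt() * noise
+ with torch.no_grad():                                        # no U-Net Jacobian
+     eps_uncond = unet(x_t, t, None)
+     eps_cond = unet(x_t, t, "prompt")
+     eps_pred = eps_uncond + cfg * (eps_cond - eps_uncond)    # classifier-free guidance
+ w_t = 1.0 - a_bar
+ grad = w_t * (eps_pred - noise)                              # Eq. (1) guidance signal
+ (x_rndr * grad.detach()).sum().backward()                    # grad * d x_rndr / d theta
+ print(theta.grad.abs().mean().item())
+ ```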
113
+
114
+ # 4 Method
115
+
116
+ The preceding analysis of existing methods like Score Distillation Sampling (SDS) raises a natural question: can a principled framework be developed to define a direct, optimal transformation trajectory where both its source and target ends are explicitly and robustly aligned with the desired true distributions? Addressing this challenge—by establishing explicit control over the distributional endpoints of the generative trajectory, rather than relying on unstructured priors (e.g., Gaussian noise)—is crucial for enhancing the fidelity and control of generative outcomes. To this end, we exploit the theoretical underpinnings of the Schrödinger Bridge problem, particularly its tractable formulations [26, 8], which provide a robust mechanism for learning direct, optimal transport paths between specified distributions. Our methodological contribution unfolds in two stages: first, we theoretically establish that standard SDS is indeed a special case of the Schrödinger Bridge framework, thereby providing a new perspective on its operation (Section 4.1). Second, building upon this insight, we propose a novel optimization algorithm grounded in tractable Schrödinger Bridge principles, to achieve improved distributional alignment throughout the generative process (Section 4.2).
117
+
118
+ ![](images/ee71589dbb401a1d27cdb136542cbfc59a5dc82b5cf53fb2b33c4d2d955bc18d.jpg)
119
+ Figure 3: Overview of Trajectory-Centric Distillation (TraCe). Our TraCe optimizes 3D parameters $\theta$ by computing a distillation gradient with a LoRA-adapted 2D diffusion model, $\epsilon_{\phi}$ . Given a text prompt $y$ and camera parameters $c$ , (1) the current 3D model is rendered in a random view to produce $x_{\mathrm{rndr}}$ . (2) An ideal target view $x_0^{\mathrm{pred}}$ is estimated from $x_{\mathrm{rndr}}$ using a pre-trained diffusion model $\epsilon_{\mathrm{pretrain}}$ via one-step denoising. (3) An intermediate latent $x_{t}$ is sampled from the analytic bridge posterior $q(x_{t} \mid x_{0}^{\mathrm{pred}}, x_{\mathrm{rndr}})$ at time $t$ . (4) The LoRA model $\epsilon_{\phi}$ predicts the noise for $x_{t}$ , and the difference between this prediction and the target noise is computed. (5) This difference directs the calculation of the TraCe gradient $\nabla_{\theta} \mathcal{L}_{\mathrm{TraCe}}$ , and drives the update of LoRA parameters $\phi$ .
120
+
121
+ # 4.1 Score Distillation Sampling as a Special Case of Schrödinger Bridges
122
+
123
+ In this section, we reformulate the SDS objective by examining its core guidance principles, and show it employs a simplified form of the backward dynamics found in the Schrödinger Bridge framework.
124
+
125
+ As established in Section 3, a Score-based Generative Model (SGM) aligns with a special configuration of the Schrödinger Bridge problem. This occurs when the Schrödinger Bridge's distribution $P_B$ at $t = 1$ is Gaussian noise $(P_B \sim \mathcal{N}(0, I))$ and its forward Schrödinger factor $\Psi(x, t) \approx 1$ . Under these conditions, the term $g_t^2 \nabla_{X_t} \log \Psi(X_t, t)$ in the forward Schrödinger Bridge SDE vanishes, causing the forward Schrödinger Bridge dynamics to become identical to the SGM's standard diffusion process. Consequently, the marginal densities $q(X_t, t)$ of this particular Schrödinger Bridge are equivalent to the SGM's noisy marginals $p(X_t, t)$ .
126
+
127
+ The crucial step in linking the Schrödinger Bridge and SGM reverse processes from a score perspective lies in Nelson's duality, $\Psi(X_{t}, t) \hat{\Psi}(X_{t}, t) = q(X_{t}, t)$ . Given $\Psi(X_{t}, t) \approx 1$ and $q(X_{t}, t) = p(X_{t}, t)$ for this specific Schrödinger Bridge, the duality simplifies to:
128
+
129
+ $$
130
+ 1 \cdot \hat{\Psi}(X_{t}, t) \approx p(X_{t}, t) \Longrightarrow \hat{\Psi}(X_{t}, t) \approx p(X_{t}, t) \tag{2}
131
+ $$
132
+
133
+ This directly implies that the score term in the general Schrödinger Bridge backward SDE, $-\nabla_{X_t}\log \hat{\Psi} (X_t,t)$ , becomes $-\nabla_{X_t}\log p(X_t,t)$ . This is precisely the score approximated by the learned network $s_{\psi}(X_t,t)$ (or its equivalent noise predictor $\epsilon (X_t,t)$ ) in an SGM.
134
+
135
+ SDS utilizes this learned score $s_{\psi}(X_t, t)$ from a pre-trained SGM to guide the optimization of a differentiable generator $g(\theta)$. The update for $g(\theta)$ is fundamentally derived from $s_{\psi}(X_t, t)$, aiming to make the generated samples $x_0 = g(\theta)$ consistent with the data manifold learned by the SGM.
136
+
137
+ Therefore, from a score gradient perspective:
138
+
139
+ - SDS operates using the score function $s_{\psi}(X_t, t)$ learned by an SGM.
140
+ - The derivation above shows that $s_{\psi}(X_t, t)$ (approximating $\nabla_{X_t} \log p(X_t, t)$ ) is equivalent to the score $-\nabla_{X_t} \log \hat{\Psi}(X_t, t)$ of a Schrödinger Bridge under the specific conditions that reduce the Schrödinger Bridge to an SGM.
141
+
142
+ Remark. In essence, SDS leverages a score gradient that is equivalent to the score function governing the reverse dynamics of the canonical Schrödinger Bridge implicit in any SGM. While general Schrödinger Bridges can offer more complex dynamics, SDS employs the score from this specific, simplified Schrödinger Bridge structure. Thus, the SDS mechanism represents an application of principles governing a special case of Schrödinger Bridges, distinguished by its reliance on the SGM-derived score $s_{\psi}$ .
143
+
144
+ # 4.2 Trajectory-Centric Distillation
145
+
146
+ To optimize the 3D model parameters $\theta$ such that current renderings $x_{\mathrm{rndr}} = g(\theta, c)$ align with a target text description $y$ , we propose a novel method, namely Trajectory-Centric Distillation (TraCe). This method leverages a 2D diffusion model, adapted with LoRA parameters $\phi$ denoted as $\epsilon_{\phi}$ , to provide a guiding gradient $\nabla_{\theta} \mathcal{L}_{\mathrm{TraCe}}(\theta)$ . The core idea is to conceptualize a diffusion bridge between the current rendering and an estimated ideal target image.
147
+
148
+ Constructing the Diffusion Bridge for Trajectory Guidance. At each optimization step for $\theta$ , we construct a specific diffusion bridge instance defined by two endpoints:
149
+
150
+ 1. Initial Bridge Endpoint $(X_{1} \gets x_{\mathrm{rndr}})$: The current rendering $x_{\mathrm{rndr}} = g(\theta, c)$ serves as the starting point of the reverse diffusion trajectory we aim to learn. In the context of our bridge, this is treated as the "noisier" state at bridge time $t = 1$.
151
+ 2. Target Bridge Endpoint $(X_0 \gets x_0^{\mathrm{pred}})$ : An estimated ideal target view $x_0^{\mathrm{pred}}$ acts as the desired endpoint at bridge time $t = 0$ . This target is dynamically obtained by performing one-step denoising on $x_{\mathrm{rndr}}$ using a pre-trained text-to-image model $\epsilon_{\mathrm{pretrain}}$ [20], conditioned on the text prompt $y$ : $x_0^{\mathrm{pred}} = (x_{\mathrm{rndr}} - \sqrt{1 - \bar{\alpha}_{t'}} \epsilon_{\mathrm{pretrain}}(x_{\mathrm{rndr}}, t', y)) / \sqrt{\bar{\alpha}_{t'}}$ , where $\bar{\alpha}_{t'}$ is from the noise schedule of $\epsilon_{\mathrm{pretrain}}$ .
152
+
153
+ With these two endpoints, $x_0^{\mathrm{pred}}$ and $x_{\mathrm{rndr}}$, established, we then sample an intermediate latent state $x_t$ along the conceptual bridge. For a sampled time $t \in [0.02, 0.5]$, following the tractable formulation of Schrödinger Bridges [26], $x_t$ is drawn from the analytically known conditional distribution $x_t \sim q(x_t | x_0^{\mathrm{pred}}, x_{\mathrm{rndr}}) = \mathcal{N}(x_t; \boldsymbol{\mu}_t, \Sigma_t I)$, where the mean $\boldsymbol{\mu}_t = \gamma_t x_0^{\mathrm{pred}} + (1 - \gamma_t) x_{\mathrm{rndr}}$ is an interpolation between the target image and current rendering, and $\Sigma_t = \sigma_t^2 \bar{\sigma}_t^2 / (\sigma_t^2 + \bar{\sigma}_t^2)$ is the bridge variance. The coefficient $\gamma_t = \bar{\sigma}_t^2 / (\sigma_t^2 + \bar{\sigma}_t^2)$, and $\sigma_t^2 = \int_0^t \beta_\tau d\tau$, $\bar{\sigma}_t^2 = \int_t^1 \beta_\tau d\tau$ are accumulated variances from a noise schedule $\beta_t$ specific to this bridge construction. This $x_t$ represents a state on a direct trajectory from $x_0^{\mathrm{pred}}$ being progressively "noised" towards $x_{\mathrm{rndr}}$ (or equivalently, $x_{\mathrm{rndr}}$ being progressively "denoised" towards $x_0^{\mathrm{pred}}$ along this trajectory).
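+
+ A hedged sketch of these two steps (one-step target estimation and bridge-posterior sampling) is given below; the pre-trained noise predictor, the two schedules, and the tensor shapes are illustrative placeholders rather than the paper's actual configuration, and only the formulas mirror the text:
+
+ ```python
+ import torch
+
+ def eps_pretrain(x, t, prompt):                   # stand-in for the frozen T2I model
+     return 0.1 * x
+
+ x_rndr = torch.rand(1, 4, 64, 64)                 # current rendering (X_1)
+ alphas_cumprod = torch.linspace(0.9999, 0.01, 1000)
+
+ # Endpoint 2: one-step denoised target x0_pred (X_0)
+ t_prime = 400
+ a_bar = alphas_cumprod[t_prime]
+ x0_pred = (x_rndr - (1 - a_bar).sqrt() * eps_pretrain(x_rndr, t_prime, "prompt")) / a_bar.sqrt()
+
+ # Bridge time and accumulated variances from a simple linear beta schedule
+ t = 0.3                                           # drawn from U[0.02, 0.5] in practice
+ beta = lambda tau: 0.1 + 1.9 * tau
+ taus = torch.linspace(0.0, 1.0, 1001)
+ betas = beta(taus)
+ sigma2_t = torch.trapz(betas[taus <= t], taus[taus <= t])      # int_0^t beta
+ sigma2_bar_t = torch.trapz(betas[taus >= t], taus[taus >= t])  # int_t^1 beta
+
+ gamma_t = sigma2_bar_t / (sigma2_t + sigma2_bar_t)
+ mu_t = gamma_t * x0_pred + (1 - gamma_t) * x_rndr
+ var_t = sigma2_t * sigma2_bar_t / (sigma2_t + sigma2_bar_t)
+ x_t = mu_t + var_t.sqrt() * torch.randn_like(mu_t)             # sample from the bridge
+ print(gamma_t.item(), var_t.item())
+ ```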
154
+
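+ To make this two-step construction concrete, the following is a minimal PyTorch sketch (not the authors' released code) of (i) the one-step estimate $x_0^{\mathrm{pred}}$ and (ii) drawing $x_t \sim q(x_t \mid x_0^{\mathrm{pred}}, x_{\mathrm{rndr}})$. The names `eps_pretrain`, `alpha_bar`, `betas`, and `ts` are assumed placeholders for the pre-trained noise predictor, its cumulative schedule $\bar{\alpha}$, and a discretized bridge schedule $\beta_\tau$ on a uniform time grid.
+
+ ```python
+ import torch
+
+ def predict_x0(x_rndr, t_prime, y, eps_pretrain, alpha_bar):
+     """One-step denoising estimate of the ideal target view (Target Bridge Endpoint)."""
+     a = alpha_bar[t_prime]                          # t_prime: discrete timestep index of the pretrained model
+     eps = eps_pretrain(x_rndr, t_prime, y)          # pretrained text-to-image noise prediction
+     return (x_rndr - torch.sqrt(1.0 - a) * eps) / torch.sqrt(a)
+
+ def sample_bridge_state(x0_pred, x_rndr, t, betas, ts):
+     """Sample x_t ~ q(x_t | x0_pred, x_rndr) = N(mu_t, Sigma_t I) on the bridge."""
+     dt = ts[1] - ts[0]                              # uniform grid spacing on [0, 1]
+     mask = ts <= t
+     sigma2 = (betas[mask] * dt).sum()               # sigma_t^2     = int_0^t beta_tau d tau
+     bar_sigma2 = (betas[~mask] * dt).sum()          # bar_sigma_t^2 = int_t^1 beta_tau d tau
+     gamma_t = bar_sigma2 / (sigma2 + bar_sigma2)
+     var_t = sigma2 * bar_sigma2 / (sigma2 + bar_sigma2)
+     mu_t = gamma_t * x0_pred + (1.0 - gamma_t) * x_rndr
+     return mu_t + torch.sqrt(var_t) * torch.randn_like(x_rndr)
+ ```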
155
+ Optimizing $\theta$ via the Bridge Trajectory. We then optimize $\theta$ using the LoRA-adapted model $\epsilon_{\phi}(x_t, t, y, c)$ , which is trained to predict the noise that would take $x_t$ towards $x_0^{\mathrm{pred}}$ . The objective for $\theta$ utilizes $\epsilon_{\phi}$ to measure the consistency of $x_t$ with respect to $x_{\mathrm{rndr}}$ along this bridge:
156
+
157
+ $$
158
+ \nabla_{\theta} \mathcal{L}_{\mathrm{TraCe}}(\theta) = \mathbb{E}_{\epsilon, t, c} \left[ w(t) \left(\epsilon_{\phi}(x_{t}, t, y, c) - \frac{x_{t} - x_{\mathrm{rndr}}}{\sigma_{t}}\right) \left(\underbrace{\frac{\partial x_{0}^{\mathrm{pred}}(x_{\mathrm{rndr}}, t, y)}{\partial x_{t}}}_{\text{U-Net Jacobian}} \frac{\partial x_{t}}{\partial x_{\mathrm{rndr}}} + 1\right) \frac{\partial x_{\mathrm{rndr}}}{\partial \theta} \right] \tag{3}
159
+ $$
160
+
161
+ where $x_{\mathrm{rndr}} = g(\theta, c)$, $t \sim \mathcal{U}[0.02, 0.5]$, and $y$ is the text prompt. The term $x_{t}$ is sampled from $q(x_{t} \mid x_{0}^{\mathrm{pred}}, x_{\mathrm{rndr}})$ as defined above, and $\sigma_{t} = \sqrt{\int_{0}^{t} \beta_{\tau} d\tau}$ from the bridge's noise schedule. Following the convention of SDS, we omit the U-Net Jacobian term $\left( \frac{\partial x_{0}^{\mathrm{pred}}(\ldots)}{\partial x_{t}} \frac{\partial x_{t}}{\partial x_{\mathrm{rndr}}} + 1 \right)$ for effective training, as it can be treated as a learnable or constant factor absorbed by $w(t)$. Thus, we have:
162
+
163
+ $$
164
+ \nabla_{\theta} \mathcal{L}_{\mathrm{TraCe}}(\theta) = \mathbb{E}_{\epsilon, t, c} \left[ w(t) \left(\epsilon_{\phi}(x_{t}, t, y, c) - \frac{x_{t} - x_{\mathrm{rndr}}}{\sigma_{t}}\right) \frac{\partial x_{\mathrm{rndr}}}{\partial \theta} \right] \tag{4}
165
+ $$
166
+
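+ In practice, Eq. (4) can be applied with the usual SDS-style stop-gradient trick, backpropagating only through the rendering $x_{\mathrm{rndr}}$. Below is a hypothetical PyTorch sketch (not the authors' released implementation) that reuses the `sample_bridge_state` helper sketched above; `eps_phi`, `sigma_t`, and `w_t` are assumed placeholders for the LoRA-adapted predictor, the bridge noise scale $\sigma_t$, and the weighting $w(t)$.
+
+ ```python
+ import torch
+
+ def trace_grad_step(x_rndr, x0_pred, eps_phi, t, y, c, sigma_t, w_t, betas, ts):
+     """Accumulate the Eq. (4) gradient into theta; x_rndr = g(theta, c) must require grad."""
+     with torch.no_grad():                                     # no gradient through the residual factor
+         x_t = sample_bridge_state(x0_pred, x_rndr.detach(), t, betas, ts)
+         target = (x_t - x_rndr.detach()) / sigma_t            # bridge "noise" between x_rndr and x_t
+         residual = eps_phi(x_t, t, y, c) - target             # stop-gradient factor of Eq. (4)
+     # d/d theta of this surrogate equals w(t) * residual * d x_rndr / d theta, i.e., Eq. (4).
+     surrogate = w_t * (residual * x_rndr).sum()
+     surrogate.backward()
+ ```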
167
+ Scheduled $t$-Sampling for Schrödinger Bridge Interpolation. For sampling the intermediate state $x_{t}$ in our TraCe objective (Eq. (4)), the time parameter $t$ dictates the characteristics of $x_{t} \sim q(x_{t} \mid x_{0}^{\mathrm{pred}}, x_{\mathrm{rndr}})$; we adopt a $t$-annealing strategy similar to the approach proposed in [15]. Throughout the optimization of $\theta$, $t$ is progressively decreased from an initial value near 0.5 towards 0.02. This annealing gradually shifts the focus of the Schrödinger Bridge interpolation from broader intermediate states towards those closer to the estimated ideal target $x_{0}^{\mathrm{pred}}$, aiding the progressive refinement of the rendered output $g(\theta, c)$.
168
+
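+ A linear ramp is one plausible realization of this annealing (the exact schedule used in the paper may differ):
+
+ ```python
+ def annealed_t(step, total_steps, t_max=0.5, t_min=0.02):
+     """Decay the bridge time linearly from t_max to t_min over the course of optimization."""
+     frac = min(step / max(total_steps - 1, 1), 1.0)
+     return t_max + frac * (t_min - t_max)
+ ```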
169
+ # 5 Experiments
170
+
171
+ ![](images/f2f7fadbe165e6b2a42e2ccbeb7064cf223926c26368c6ad9d02b7c71ea5ced2.jpg)
172
+ a hermit crab with a colorful shell
173
+
174
+ ![](images/f83a6aea713e487388a11e440b019c1083ebf1f985c419df4eb683f014e8f11a.jpg)
175
+
176
+ ![](images/e35f58f244b9048c63af3998af4ddfb1a6b246c842ce0494eb1ab4a0279f9325.jpg)
177
+ a golden goblet, low poly
178
+
179
+ ![](images/fd89b1739055b76eee835aecbad1c8a9de324825d85f9cebcda1b487a15ab123.jpg)
180
+
181
+ ![](images/b23f59d631f6f846dbec7e5ab503d6ac13afe339ebb427f100d3e621a200aab0.jpg)
182
+ CSD
183
+ a fox
184
+
185
+ ![](images/f0d3e0445746dc8782de9547098f243f6fb57756fdf790b7209e30a109901265.jpg)
186
+ Ours
187
+
188
+ ![](images/6259b3130f53a78ac23d7f031ac537b354684603bbec743a6121130a3decca0b.jpg)
189
+ CSD
190
+ A zoomed out DSLR photo of an amigurumi motorcycle
191
+
192
+ ![](images/1864e4a52846c72cbf396df40dd5a425801415616bd49cf589aef987f3ebccd4.jpg)
193
+ Ours
194
+
195
+ ![](images/fe8384a2cb5dfb905be6b82332dbeed52add618d2ea9fec3b6434bcbb7a879ce.jpg)
196
+ VSD
197
+ A car made out of sushi
198
+
199
+ ![](images/647bf2dbfceccf5d4f38badda6cdac57a8e1aa15ee42362c03e4db11d1a95da2.jpg)
200
+ Ours
201
+
202
+ ![](images/abafee86f309274977bcf4a00bdc063320dca5f91924f0d29c9db2291eaafe76.jpg)
203
+ VSD
204
+ A large, multi-layered, symmetrical wedding cake, with smooth fondant, delicate piping, and lifelike sugar flowers in full bloom, displayed on a silver stand.
205
+
206
+ ![](images/a949ec80cda0a064c99479fbbb2605eae7bf5a8033d4a1e997762a6c2e60ab53.jpg)
207
+ Ours
208
+
209
+ ![](images/f4283161b0bae0d5f69c3a87e706f70bb866e4154e9332d96563071194ad4206.jpg)
210
+ SDI
211
+ a blue lobster
212
+
213
+ ![](images/991068040d3a62dc5a87599353cd1d6e6522186984c7db68d839a6f7eaf129ea.jpg)
214
+ Ours
215
+
216
+ ![](images/37ef9b8c36fa44e4760a4d6139657a384365a663af4574f7c664b553e68dd952.jpg)
217
+ SDI
218
+ a roast turkey on a white platter
219
+
220
+ ![](images/0f8b82644c4ae4a2893233a35d268bb6725f8668ea2648c467c2d81e40dda859.jpg)
221
+
222
+ ![](images/f6dd1a8093da928a3a634594c17ac799cfa043090312a2d7327dd51c7d7834c9.jpg)
223
+ ISM
224
+
225
+ ![](images/254c5ff5273bd63447e78a805b1fa93381334ac3f7e71405595083a63a1d8408.jpg)
226
+ Ours
227
+ a delicious croissant
228
+
229
+ ![](images/e51b72b62ce0dcede92a87d0b18b735168ca54547dac1c409ca0fb955d70a5b9.jpg)
230
+ ISM
231
+
232
+ ![](images/0c400548ccef50083499af8802402bf0001ed51cda2d6a0cc0439a7ce765c7a9.jpg)
233
+ Ours
234
+ a violin
235
+ Figure 4: Qualitative comparisons. We present visual examples with the same text prompt.
236
+
237
+ Implementation Details. We choose recent state-of-the-art (SOTA) text-to-3D approaches for comparison: NeRF-based methods, such as Classifier Score Distillation (CSD) [48], ProlificDreamer (VSD) [45], and Score Distillation via Inversion (SDI) [29], and 3DGS-based methods like GaussianDreamer (SDS) [47] and LucidDreamer (ISM) [23]. Please see more details and experiments in Supplementary.
238
+
239
+ Qualitative Comparisons. Figure 4 presents visual results for several challenging text prompts. Our approach demonstrates the ability to generate higher-quality 3D assets compared to other SOTA methods. Compared to SDI [29], our method yields significantly improved texture fidelity. Outputs from CSD [48] often exhibit a characteristic yellowish hue and a less realistic, cartoon-like appearance, which TraCe avoids, producing more natural color rendition and photorealism. When compared against VSD [45], our model better interprets complex textural and stylistic prompts, accurately capturing the text's intent and generating more coherent content. Contrasting with SDS [47], our results exhibit superior sharpness and finer details in both geometry and texture, leading to more visually appealing and realistic outputs. While ISM [23] can produce coherent structures, its outputs often exhibit a stylized, painterly quality; in contrast, our TraCe generates results with enhanced photorealism and more natural material appearance. These results demonstrate our method's effectiveness in generating detailed and accurate 3D geometry and appearance from the given text descriptions.
242
+
243
+ Quantitative Comparison. We quantitatively evaluate our TraCe against other methods using 83 distinct prompts from the Dreamfusion online gallery$^2$, with 120 views per prompt. We benchmark generation quality using CLIP Score (%), GPTEval3D (Overall), which leverages GPT-4o for evaluation, and ImageReward. CLIP Scores are evaluated with ViT-L/14, ViT-B/16, and ViT-B/32 backbones. We also assess computational efficiency via processing time (Time) and average peak VRAM usage (VRAM). As shown in Table 1, the proposed TraCe achieves state-of-the-art generation quality, securing the top CLIP Scores across all ViT backbones, e.g., $69.2609 \pm 7.8366\%$ with ViT-L/14. Furthermore, TraCe demonstrates superior performance on advanced perception metrics, yielding the highest GPTEval3D score of 1028.03 and the most favorable (least negative) ImageReward score of $-0.2855 \pm 0.8909$, indicating enhanced aesthetic quality and semantic alignment. With an average processing time of 14 minutes and an average peak VRAM usage of 18741 MiB, TraCe offers high-fidelity generation with a compelling balance of qualitative performance, computational efficiency, and memory footprint.
244
+
245
+ Table 1: Quantitative comparisons. Comparison of different methods on CLIP Score, GPTEval3D Score, ImageReward Score, running time, and VRAM usage. We report mean and standard deviation across 83 prompts and 120 views.
246
+
247
+ <table><tr><td rowspan="2">Method</td><td colspan="3">CLIP Score (%) ↑</td><td rowspan="2">GPTEval3D (Overall)↑</td><td rowspan="2">ImageReward↑</td><td rowspan="2">Time</td><td rowspan="2">VRAM</td></tr><tr><td>ViT-L/14</td><td>ViT-B/16</td><td>ViT-B/32</td></tr><tr><td>SDS [47]</td><td>68.6146±7.9134</td><td>27.7049±3.7004</td><td>27.5561±3.5893</td><td>1018.09</td><td>-0.4329±0.9125</td><td>10min</td><td>18147MiB</td></tr><tr><td>CSD [48]</td><td>68.0282±7.5093</td><td>27.0886±3.7342</td><td>26.5844±3.8703</td><td>983.04</td><td>-0.6715±0.7482</td><td>11min</td><td>19804MiB</td></tr><tr><td>VSD [45]</td><td>67.2697±8.5573</td><td>27.0749±3.9675</td><td>26.9722±3.9563</td><td>1007.49</td><td>-0.5330±0.8927</td><td>17min</td><td>26473MiB</td></tr><tr><td>ISM [23]</td><td>69.0093±10.2400</td><td>27.5460±3.6817</td><td>26.9822±3.5495</td><td>1012.37</td><td>-0.3904±0.9503</td><td>20min</td><td>10151MiB</td></tr><tr><td>SDI [29]</td><td>63.0409±11.7841</td><td>25.6487±5.2540</td><td>25.5421±5.0903</td><td>971.98</td><td>-0.8334±1.0391</td><td>10min</td><td>16011MiB</td></tr><tr><td>TraCe</td><td>69.2609±7.8366</td><td>27.9334±3.7382</td><td>27.7049±3.8671</td><td>1028.03</td><td>-0.2855±0.8909</td><td>14min</td><td>18741MiB</td></tr></table>
248
+
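+ For reference, the CLIP Score reported above can be computed as the mean cosine similarity (scaled to %) between each rendered view and the text prompt. The following is a minimal sketch using the Hugging Face `transformers` CLIP implementation; the model name and the averaging over per-prompt views are assumptions here, and the paper's exact evaluation script may differ.
+
+ ```python
+ import torch
+ from PIL import Image
+ from transformers import CLIPModel, CLIPProcessor
+
+ def clip_score(image_paths, prompt, model_name="openai/clip-vit-large-patch14"):
+     """Mean cosine similarity (in %) between rendered views and the text prompt."""
+     model = CLIPModel.from_pretrained(model_name).eval()
+     processor = CLIPProcessor.from_pretrained(model_name)
+     images = [Image.open(p).convert("RGB") for p in image_paths]
+     inputs = processor(text=[prompt], images=images, return_tensors="pt", padding=True)
+     with torch.no_grad():
+         out = model(**inputs)
+     img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
+     txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
+     return 100.0 * (img @ txt.T).mean().item()
+ ```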
249
+ ![](images/c0ec5104e7345623164a473106d11cdeb43e5dfbb0ff5276672973b33cc7e2af.jpg)
250
+ VSD
251
+
252
+ ![](images/20c20add59cd046da657948538d615b824294ca74363ad21474aa92f16b007ce.jpg)
253
+ CSD
254
+
255
+ ![](images/118b811a53661eaa06f615e83f1b0e0277086f9900b987ae06d9188e4d9d377c.jpg)
256
+ Naive bridge
257
+
258
+ ![](images/e98b05a3f89d0e8bb9b27f78f6464ce61b5271b19de2c0000ce1f03da2fe1051.jpg)
259
+ w/o LoRA &
260
+ Scheduled t-Sampling
261
+
262
+ ![](images/ac8ce56deef8adf1866fd86341afa04bd093a54b8c428b90a16c0bef580a624c.jpg)
263
+ Ours w/o
264
+ Scheduled t-Sampling
265
+ Figure 5: Ablation study on our framework.
266
+
267
+ ![](images/261aad9b2e3c84a17fdbe36bc208f30e70f0cfeef76fcfa791e5cfa4f645a7a8.jpg)
268
+ Ours
269
+
270
+ Ablation Study. Figure 5 showcases the ablation study of our TraCe on a fox generation. VSD [45] and CSD [48] exhibit less desirable generations (e.g., missing details). The third column illustrates a naive Schrödinger Bridge approach [30], which attempts to bridge distributions defined by source and target prompts and results in a comparatively smoother, less detailed rendering. The fourth column shows TraCe without LoRA adaptation and without our scheduled $t$-sampling, where noticeable artifacts such as blue hues on the fur are apparent. Introducing LoRA but omitting the scheduled $t$-sampling (fifth column) mitigates some artifacts, yet color inconsistencies persist. Finally, our full TraCe method ("Ours"), supported by LoRA-adapted learning of its specific score dynamics and an annealed $t$-sampling schedule, generates significantly higher-fidelity details in the fur and tail, boosting overall realism compared to the other methods (VSD, CSD) and the ablated versions. These results highlight the role of our core Schrödinger Bridge formulation: it achieves superior final quality when augmented with these tailored learning components.
271
+
272
+ Table 2: ImageReward ablation over LoRA and scheduled $t$ -sampling.
273
+
274
+ <table><tr><td>Method Configuration</td><td>ImageReward (↑)</td></tr><tr><td>LoRA off &amp; scheduled t-sampling off</td><td>-0.4488 ± 0.9964</td></tr><tr><td>LoRA off &amp; scheduled t-sampling on</td><td>-0.3389 ± 0.9721</td></tr><tr><td>LoRA on &amp; scheduled t-sampling off</td><td>-0.4020 ± 1.0019</td></tr><tr><td>LoRA on &amp; scheduled t-sampling on (ours)</td><td>-0.2486 ± 0.8909</td></tr></table>
275
+
276
+ We perform an ablation study on our key components, LoRA adaptation and scheduled $t$-sampling, measuring quality with ImageReward (Table 2). Our full method (-0.2486) significantly outperforms the baseline with both components disabled (-0.4488), as well as the variants with only LoRA enabled (-0.4020) or only scheduled $t$-sampling enabled (-0.3389). The results confirm that both components are crucial and demonstrate their strong synergistic effect.
277
+
278
+ CFG value. We investigate the impact of the CFG value on our TraCe, as illustrated in Figure 6 with two example objects. While very low CFG values (e.g., 5) yield reduced visual fidelity, TraCe produces high-quality, well-defined results starting at a CFG of approximately 15-20. The visual outcomes are stable and robust within the CFG 15-20 range. Beyond this, at higher CFG values (25-100), results remain largely consistent, with minimal further improvement. This demonstrates TraCe's capability to effectively generate high-quality 3D assets at relatively low and stable CFG settings. Furthermore, TraCe's enhanced visual quality is complemented by its robust CLIP score performance within a moderate CFG range (e.g., 10-30) relative to the other compared methods, as detailed in Figure ??.
279
+
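+ For context, the CFG value discussed here is the guidance scale $s$ in the standard classifier-free guidance combination [13] used when querying the diffusion model, where $\varnothing$ denotes the unconditional (empty) prompt:
+
+ $$
+ \hat{\epsilon}(x_t, t, y) = \epsilon(x_t, t, \varnothing) + s \left( \epsilon(x_t, t, y) - \epsilon(x_t, t, \varnothing) \right)
+ $$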
280
+ ![](images/59c79348a0dd182fe973676088afac3883262ed6f240982dd7ac4c38b083ce02.jpg)
281
+ Figure 6: Different CFG value and generated 3D assets. Prompts are "an overstuffed pastrami sandwich" (top row), "a car made out of sushi" (bottom row).
282
+
283
+ # 6 Conclusion
284
+
285
+ We introduce Trajectory-Centric Distillation (TraCe), a novel text-to-3D generation framework. Our approach is rooted in a new theoretical understanding of SDS as a specific instance of the Schrödinger Bridge problem. The proposed TraCe explicitly constructs and learns a direct diffusion bridge between current renderings and text-conditioned targets, employing a LoRA-adapted diffusion model to accurately model the bridge's score dynamics. Comprehensive experiments demonstrate TraCe's state-of-the-art performance, yielding 3D assets with superior visual quality and fidelity, notably at lower and more stable Classifier-Free Guidance values than prior methods. These results underscore the benefits of our principled, direct optimization trajectory. We believe TraCe will offer new insights for text-to-3D generation, in terms of efficient and robust trajectory learning for generative models.
286
+
287
+ # 7 Acknowledgments
288
+
289
+ This work was supported in part by the National Science Foundation of China under Grant 62472375, in part by the Major Program of the National Natural Science Foundation of Zhejiang under Grants LD24F020014 and LD25F020002, in part by the Zhejiang Pioneer (Jianbing) Project (2024C01032), and in part by the Ningbo Yongjiang Talent Programme (2023A-198-G).
290
+
291
+ # References
292
+
293
+ [1] Thiemo Alldieck, Nikos Kolotouros, and Cristian Sminchisescu. Score distillation sampling with learned manifold corrective. In European Conference on Computer Vision, pages 1-18. Springer, 2024.
294
+ [2] Brian DO Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313-326, 1982.
295
+ [3] Yuanhao Cai, He Zhang, Kai Zhang, Yixun Liang, Mengwei Ren, Fujun Luan, Qing Liu, Soo Ye Kim, Jianming Zhang, Zhifei Zhang, et al. Baking gaussian splatting into diffusion denoiser for fast and scalable single-stage image-to-3d generation. arXiv preprint arXiv:2411.14384, 2024.
296
+ [4] Rui Chen, Yongwei Chen, Ningxin Jiao, and Kui Jia. Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 22246-22256, 2023.
297
+ [5] Yongxin Chen, Tryphon T Georgiou, and Michele Pavon. Stochastic control liaisons: Richard sinkhorn meets gaspard monge on a schrödinger bridge. Siam Review, 63(2):249-313, 2021.
298
+ [6] Zilong Chen, Feng Wang, Yikai Wang, and Huaping Liu. Text-to-3d using gaussian splatting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 21401-21412, 2024.
299
+ [7] Paolo Dai Pra. A stochastic control approach to reciprocal diffusion processes. Applied mathematics and Optimization, 23(1):313-329, 1991.
300
+ [8] Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion schrödinger bridge with applications to score-based generative modeling. Advances in Neural Information Processing Systems, 34:17695-17709, 2021.
301
+ [9] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021.
302
+ [10] Rao Fu, Xiao Zhan, Yiwen Chen, Daniel Ritchie, and Srinath Sridhar. Shapecrafter: A recursive text-conditioned 3d shape generation model. Advances in Neural Information Processing Systems, 35:8882-8895, 2022.
303
+ [11] Amir Hertz, Kfir Aberman, and Daniel Cohen-Or. Delta denoising score. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2328–2337, 2023.
304
+ [12] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020.
305
+ [13] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
306
+ [14] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. ICLR, 1(2):3, 2022.
307
+ [15] Yukun Huang, Jianan Wang, Yukai Shi, Boshi Tang, Xianbiao Qi, and Lei Zhang. Dreamtime: An improved optimization strategy for diffusion-guided 3d generation. arXiv preprint arXiv:2306.12422, 2023.
308
+ [16] Aapo Hyvarinen and Peter Dayan. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4), 2005.
309
+ [17] Oren Katzir, Or Patashnik, Daniel Cohen-Or, and Dani Lischinski. Noise-free score distillation. arXiv preprint arXiv:2310.17590, 2023.
310
+ [18] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023.
311
+
312
+ [19] Inhee Lee, Byungjun Kim, and Hanbyul Joo. Guess the unseen: Dynamic 3d scene reconstruction from partial 2d glimpses. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1062-1071, 2024.
313
+ [20] Kyungmin Lee, Kihyuk Sohn, and Jinwoo Shin. Dreamflow: High-quality text-to-3d generation by approximating probability flow. arXiv preprint arXiv:2403.14966, 2024.
314
+ [21] Christian Léonard. A survey of the schrödinger problem and some of its connections with optimal transport. arXiv preprint arXiv:1308.0215, 2013.
315
+ [22] Zongrui Li, Minghui Hu, Qian Zheng, and Xudong Jiang. Connecting consistency distillation to score distillation for text-to-3d generation. In European Conference on Computer Vision, pages 274–291. Springer, 2024.
316
+ [23] Yixun Liang, Xin Yang, Jiantao Lin, Haodong Li, Xiaogang Xu, and Yingcong Chen. Luciddreamer: Towards high-fidelity text-to-3d generation via interval score matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6517–6526, 2024.
317
+ [24] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3d: High-resolution text-to-3d content creation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 300–309, 2023.
318
+ [25] Chenguo Lin, Panwang Pan, Bangbang Yang, Zeming Li, and Yadong Mu. Diffsplat: Repurposing image diffusion models for scalable gaussian splat generation. arXiv preprint arXiv:2501.16764, 2025.
319
+ [26] Guan-Horng Liu, Arash Vahdat, De-An Huang, Evangelos A Theodorou, Weili Nie, and Anima Anandkumar. I²sb: Image-to-image schrödinger bridge. arXiv preprint arXiv:2302.05872, 2023.
320
+ [27] Jian Liu, Xiaoshui Huang, Tianyu Huang, Lu Chen, Yuenan Hou, Shixiang Tang, Ziwei Liu, Wanli Ouyang, Wangmeng Zuo, Junjun Jiang, et al. A comprehensive survey on 3d content generation. arXiv preprint arXiv:2402.01166, 2024.
321
+ [28] Qihao Liu, Yi Zhang, Song Bai, Adam Kortylewski, and Alan Yuille. Direct-3d: Learning direct text-to-3d generation on massive noisy 3d data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6881–6891, 2024.
322
+ [29] Artem Lukoianov, Haitz Sáez de Ocáriz Borde, Kristjan Greenewald, Vitor Guizilini, Timur Bagautdinov, Vincent Sitzmann, and Justin M Solomon. Score distillation via reparametrized ddim. Advances in Neural Information Processing Systems, 37:26011-26044, 2024.
323
+ [30] David McAllister, Songwei Ge, Jia-Bin Huang, David Jacobs, Alexei Efros, Aleksander Holynski, and Angjoo Kanazawa. Rethinking score distillation as a bridge between image distributions. Advances in Neural Information Processing Systems, 37:33779-33804, 2024.
324
+ [31] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021.
325
+ [32] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM transactions on graphics (TOG), 41(4):1-15, 2022.
326
+ [33] Edward Nelson. Dynamical theories of Brownian motion, volume 106. Princeton university press, 2020.
327
+ [34] Michele Pavon and Anton Wakolbinger. On free energy, stochastic control, and schrödinger processes. In Modeling, Estimation and Control of Systems with Uncertainty: Proceedings of a Conference held in Sopron, Hungary, September 1990, pages 334-348. Springer, 1991.
328
+
329
+ [35] Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. arXiv preprint arXiv:2209.14988, 2022.
330
+ [36] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022.
331
+ [37] Seyedmorteza Sadat, Otmar Hilliges, and Romann M Weber. Eliminating oversaturation and artifacts of high guidance scales in diffusion models. In The Thirteenth International Conference on Learning Representations, 2024.
332
+ [38] Erwin Schrödinger. Über die Umkehrung der Naturgesetze. Verlag der Akademie der Wissenschaften in Kommission bei Walter De Gruyter u ..., 1931.
333
+ [39] Erwin Schrödinger. Sur la théorie relativiste de l'électron et l'interprétation de la mécanique quantique. In Annales de l'institut Henri Poincaré, volume 2, pages 269-310, 1932.
334
+ [40] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, pages 2256-2265. pmlr, 2015.
335
+ [41] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.
336
+ [42] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.
337
+ [43] Pascal Vincent. A connection between score matching and denoising autoencoders. Neural computation, 23(7):1661-1674, 2011.
338
+ [44] Peihao Wang, Zhiwen Fan, Dejia Xu, Dilin Wang, Sreyas Mohan, Forrest Iandola, Rakesh Ranjan, Yilei Li, Qiang Liu, Zhangyang Wang, et al. Steindreamer: Variance reduction for text-to-3d score distillation via stein identity. arXiv preprint arXiv:2401.00604, 2023.
339
+ [45] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. Advances in Neural Information Processing Systems, 36:8406–8441, 2023.
340
+ [46] Min Wei, Jingkai Zhou, Junyao Sun, and Xuesong Zhang. Adversarial score distillation: when score distillation meets gan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8131-8141, 2024.
341
+ [47] Taoran Yi, Jiemin Fang, Junjie Wang, Guanjun Wu, Lingxi Xie, Xiaopeng Zhang, Wenyu Liu, Qi Tian, and Xinggang Wang. Gaussian dreamer: Fast generation from text to 3d gaussians by bridging 2d and 3d diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6796-6807, 2024.
342
+ [48] Xin Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Song-Hai Zhang, and Xiaojuan Qi. Text-to-3d with classifier score distillation. arXiv preprint arXiv:2310.19415, 2023.
343
+ [49] Junzhe Zhu, Peiye Zhuang, and Sanmi Koyejo. Hifa: High-fidelity text-to-3d generation with advanced diffusion guidance. arXiv preprint arXiv:2305.18766, 2023.
344
+
345
+ # NeurIPS Paper Checklist
346
+
347
+ The checklist is designed to encourage best practices for responsible machine learning research, addressing issues of reproducibility, transparency, research ethics, and societal impact. Do not remove the checklist: The papers not including the checklist will be desk rejected. The checklist should follow the references and follow the (optional) supplemental material. The checklist does NOT count towards the page limit.
348
+
349
+ Please read the checklist guidelines carefully for information on how to answer these questions. For each question in the checklist:
350
+
351
+ - You should answer [Yes], [No], or [NA].
352
+ - [NA] means either that the question is Not Applicable for that particular paper or the relevant information is Not Available.
353
+ - Please provide a short (1–2 sentence) justification right after your answer (even for NA).
354
+
355
+ The checklist answers are an integral part of your paper submission. They are visible to the reviewers, area chairs, senior area chairs, and ethics reviewers. You will be asked to also include it (after eventual revisions) with the final version of your paper, and its final version will be published with the paper.
356
+
357
+ The reviewers of your paper will be asked to use the checklist as one of the factors in their evaluation. While "[Yes]" is generally preferable to "[No]", it is perfectly acceptable to answer "[No]" provided a proper justification is given (e.g., "error bars are not reported because it would be too computationally expensive" or "we were unable to find the license for the dataset we used"). In general, answering "[No]" or "[NA]" is not grounds for rejection. While the questions are phrased in a binary way, we acknowledge that the true answer is often more nuanced, so please just use your best judgment and write a justification to elaborate. All supporting evidence can appear either in the main paper or the supplemental material, provided in appendix. If you answer [Yes] to a question, in the justification please point to the section(s) where related material for the question can be found.
358
+
359
+ IMPORTANT, please:
360
+
361
+ - Delete this instruction block, but keep the section heading "NeurIPS Paper Checklist".
362
+ - Keep the checklist subsection headings, questions/answers and guidelines below.
363
+ - Do not modify the questions and only use the provided macros for your answers.
364
+
365
+ # 1. Claims
366
+
367
+ Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
368
+
369
+ Answer: [Yes]
370
+
371
+ Justification: We establish a novel theoretical connection, demonstrating that SDS can be precisely understood as a specific case of the Schrödinger Bridge framework. Experiments demonstrate that our TraCe achieves high-quality 3D generation, surpassing current state-of-the-art techniques. See the abstract and the end of Section 1.
372
+
373
+ Guidelines:
374
+
375
+ - The answer NA means that the abstract and introduction do not include the claims made in the paper.
376
+ - The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
377
+ - The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
378
+ - It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
379
+
380
+ # 2. Limitations
381
+
382
+ Question: Does the paper discuss the limitations of the work performed by the authors?
383
+
384
+ Answer: [Yes]
385
+
386
+ Justification: See Supplementary.
387
+
388
+ Guidelines:
389
+
390
+ - The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
391
+ - The authors are encouraged to create a separate "Limitations" section in their paper.
392
+ - The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
393
+ - The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
394
+ - The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
395
+ - The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
396
+ - If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
397
+ - While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
398
+
399
+ # 3. Theory assumptions and proofs
400
+
401
+ Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
402
+
403
+ Answer: [Yes]
404
+
405
+ Justification: See Section 4.
406
+
407
+ Guidelines:
408
+
409
+ - The answer NA means that the paper does not include theoretical results.
410
+ - All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
411
+ - All assumptions should be clearly stated or referenced in the statement of any theorems.
412
+ - The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
413
+ - Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
414
+ - Theorems and Lemmas that the proof relies upon should be properly referenced.
415
+
416
+ # 4. Experimental result reproducibility
417
+
418
+ Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
419
+
420
+ Answer: [Yes]
421
+
422
+ Justification: We disclose the experimental settings to reproduce the main experimental results in our paper in Supplementary and the settings of all compared methods in Section 5.
423
+
424
+ # Guidelines:
425
+
426
+ - The answer NA means that the paper does not include experiments.
427
+ - If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
428
+ - If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
429
+ - Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
430
+ - While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
431
+ (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
432
+ (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
433
+ (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
434
+ (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
435
+
436
+ # 5. Open access to data and code
437
+
438
+ Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
439
+
440
+ Answer: [Yes]
441
+
442
+ Justification: Our code will be released to the community.
443
+
444
+ Guidelines:
445
+
446
+ - The answer NA means that paper does not include experiments requiring code.
447
+ - Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
448
+ - While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
449
+ - The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
450
+ - The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
451
+ - The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
452
+ - At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
453
+
454
+ - Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
455
+
456
+ # 6. Experimental setting/details
457
+
458
+ Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
459
+
460
+ Answer: [Yes]
461
+
462
+ Justification: We provide the optimization and train/test details of our proposed method in Supplementary.
463
+
464
+ Guidelines:
465
+
466
+ - The answer NA means that the paper does not include experiments.
467
+ - The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
468
+ - The full details can be provided either with the code, in appendix, or as supplemental material.
469
+
470
+ # 7. Experiment statistical significance
471
+
472
+ Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
473
+
474
+ Answer: [Yes]
475
+
476
+ Justification: Ours reports the results of multiple rounds of the experiment, reflecting the statistics of the experiments.
477
+
478
+ Guidelines:
479
+
480
+ - The answer NA means that the paper does not include experiments.
481
+ - The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
482
+ - The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
483
+ - The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
484
+ - The assumptions made should be given (e.g., Normally distributed errors).
485
+ - It should be clear whether the error bar is the standard deviation or the standard error of the mean.
486
+ - It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
487
+ - For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
488
+ - If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
489
+
490
+ # 8. Experiments compute resources
491
+
492
+ Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
493
+
494
+ Answer: [Yes]
495
+
496
+ Justification: See Section 5.
497
+
498
+ Guidelines:
499
+
500
+ - The answer NA means that the paper does not include experiments.
501
+ - The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
502
+
503
+ - The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
504
+ - The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
505
+
506
+ # 9. Code of ethics
507
+
508
+ Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
509
+
510
+ Answer: [Yes]
511
+
512
+ Justification: See Supplementary.
513
+
514
+ Guidelines:
515
+
516
+ - The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
517
+ - If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
518
+ - The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
519
+
520
+ # 10. Broader impacts
521
+
522
+ Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
523
+
524
+ Answer: [Yes]
525
+
526
+ Justification: See Supplementary.
527
+
528
+ Guidelines:
529
+
530
+ - The answer NA means that there is no societal impact of the work performed.
531
+ - If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
532
+ - Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
533
+ - The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
534
+ - The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
535
+ - If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
536
+
537
+ # 11. Safeguards
538
+
539
+ Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
540
+
541
+ Answer: [NA]
542
+
543
+ Justification: Our paper poses no such risks.
544
+
545
+ Guidelines:
546
+
547
+ - The answer NA means that the paper poses no such risks.
548
+
549
+ - Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
550
+ - Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
551
+ - We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
552
+
553
+ # 12. Licenses for existing assets
554
+
555
+ Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
556
+
557
+ Answer: [Yes]
558
+
559
+ Justification: The assets used in the paper are properly credited, and we respect the license and terms of use of these assets throughout our research procedures.
560
+
561
+ Guidelines:
562
+
563
+ - The answer NA means that the paper does not use existing assets.
564
+ - The authors should cite the original paper that produced the code package or dataset.
565
+ - The authors should state which version of the asset is used and, if possible, include a URL.
566
+ - The name of the license (e.g., CC-BY 4.0) should be included for each asset.
567
+ - For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
568
+ - If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
569
+ - For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
570
+ - If this information is not available online, the authors are encouraged to reach out to the asset's creators.
571
+
572
+ # 13. New assets
573
+
574
+ Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
575
+
576
+ Answer: [NA]
577
+
578
+ Justification: Our paper does not release new assets.
579
+
580
+ Guidelines:
581
+
582
+ - The answer NA means that the paper does not release new assets.
583
+ - Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
584
+ - The paper should discuss whether and how consent was obtained from people whose asset is used.
585
+ - At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
586
+
587
+ # 14. Crowdsourcing and research with human subjects
588
+
589
+ Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
590
+
591
+ Answer: [Yes]
592
+
593
+ Justification: See Supplementary.
594
+
595
+ Guidelines:
596
+
597
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
598
+ - Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
599
+ - According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
600
+
601
+ # 15. Institutional review board (IRB) approvals or equivalent for research with human subjects
602
+
603
+ Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
604
+
605
+ Answer: [Yes]
606
+
607
+ Justification: See Supplementary.
608
+
609
+ Guidelines:
610
+
611
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
612
+ - Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
613
+ - We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
614
+ - For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
615
+
616
+ # 16. Declaration of LLM usage
617
+
618
+ Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigourousness, or originality of the research, declaration is not required.
619
+
620
+ Answer: [NA]
621
+
622
+ Justification: Our core method development in this research does not involve LLMs as any important, original, or non-standard components.
623
+
624
+ Guidelines:
625
+
626
+ - The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
627
+ - Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
NeurIPS/2025/Walking the Schrödinger Bridge_ A Direct Trajectory for Text-to-3D Generation/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a2044dcae97c5eeb81ec52f3ff33cfa830f0590d36a0849743971eeda9f6aee4
3
+ size 530295
NeurIPS/2025/Walking the Schrödinger Bridge_ A Direct Trajectory for Text-to-3D Generation/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:949f579393db86d38fa6f824a0e4ffac4e8dda0fa93ddcd1d26b425fc9898d45
3
+ size 764093
NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d9987a5fcffc39ef3ae07a92a69a8e52e7a1d41d1a6ff6fc1ed70ee380cbae3f
3
+ size 164499
NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6f236bdec0e2b9d15064749b8b58d1fa810ae24425be12b1ad89b346044c416d
3
+ size 216997
NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:16630756521389aecdbebddd3ee56c82965e670940915787f8391e3836ff7dc5
3
+ size 7298285
NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/full.md ADDED
The diff for this file is too large to render. See raw diff
 
NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:30b7f0fffa240731cc8e4951a8e67ae3abe283c7323c827af3823bd246198197
3
+ size 517318
NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a2ab7a14104ccaab8711c42fb6f4df374aa299e7a7e5e7eb9c17826b887b4dfc
3
+ size 825376
NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0375c511446779bc3805dda20636e6ee0f1faa9f57482b5b76a4d7318f853998
3
+ size 183293
NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e7c2dbbaaa2175b7ef46d8f1906511aaf5af788ecd058ebf3a8bea21c2f1cfd1
3
+ size 232193
NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:919e63d1c33e3e70ba89ef55534dfb31b3bda266f3b1de5f2655b49bdf436fab
3
+ size 11015567
NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/full.md ADDED
The diff for this file is too large to render. See raw diff
 
NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:76e74f755b5dc067bb192ba8f7105b096ef2e53feb3dbb756cd34f974196b503
3
+ size 1795536
NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:17829ca25f5c08ade8f1b565d7fae4a5456d082dcf6b025e18a903a5686e965c
3
+ size 963649
NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d6add4ff66ab868be2600d75e8fbb9601e6e560d0fe0c6d7399ae753428bc858
3
+ size 137780
NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1251dd91f70a3f714b2aa2712848191bc90d9bdde49a23f9a1212d916f92e3d9
3
+ size 180930
NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5ae2dec9695e8678dc64297a7aef0691d372829f73c7a5b9baedfcf304cd5456
3
+ size 14763992
NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/full.md ADDED
@@ -0,0 +1,684 @@
 
 
 
 
1
+ # WarpGAN: Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting
2
+
3
+ Kaitao Huang $^{1}$ , Yan Yan $^{1\dagger}$ , Jing-Hao Xue $^{2}$ , Hanzi Wang $^{1}$
4
+
5
+ <sup>1</sup>Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, P.R. China
6
+
7
+ $^{2}$ Department of Statistical Science, University College London, UK huangkt@stu.xmu.edu.cn, yanyan@xmu.edu.cn
8
+
9
+ ![](images/e2d4706c632373f63fdbf03bc2cbc1c7999af6cb2e0f81de3c514b1c5809ed1d.jpg)
10
+ Figure 1: Visual examples. Given a single input image (the first row), our WarpGAN synthesizes images from five novel views: front, right, left, top, and down (the second to the sixth rows).
11
+
12
+ # Abstract
13
+
14
+ 3D GAN inversion projects a single image into the latent space of a pre-trained 3D GAN to achieve single-shot novel view synthesis, which requires visible regions with high fidelity and occluded regions with realism and multi-view consistency. However, existing methods focus on the reconstruction of visible regions, while the generation of occluded regions relies only on the generative prior of 3D GAN. As a result, the generated occluded regions often exhibit poor quality due to the information loss caused by the low bit-rate latent code. To address this, we introduce the warping-and-inpainting strategy to incorporate image inpainting into 3D GAN inversion and propose a novel 3D GAN inversion method, WarpGAN. Specifically, we first employ a 3D GAN inversion encoder to project the single-view image into a latent code that serves as the input to 3D GAN. Then, we perform warping to a novel view using the depth map generated by 3D GAN. Finally, we develop a novel SVINet, which leverages the symmetry prior and multi-view image correspondence w.r.t. the same latent code to perform inpainting of occluded regions in the warped image. Quantitative and qualitative experiments demonstrate that our method consistently outperforms several state-of-the-art methods.
15
+
16
+ # 1 Introduction
17
+
18
+ GANs [13] have made remarkable progress in synthesizing unconditional images. In particular, StyleGAN [20, 21] has achieved photorealistic quality on high-resolution images. Several extensions [15, 31, 36] leverage the latent space (i.e., the $\mathcal{W}$ space) to control semantic attributes (e.g., expression and age). However, these 2D GANs suffer from inferior control over geometrical aspects of generated images, leading to multi-view inconsistency for viewpoint manipulation.
19
+
20
+ Recently, with the development of neural radiance fields (NeRF) [27] in novel view synthesis (NVS), a variety of 3D GANs [2, 5, 6, 14, 29, 39, 41] have been proposed to integrate NeRF into style-based generation, resulting in remarkable success in generating highly realistic images. Based on it, 3D GAN inversion methods project a single image into the latent space of a pre-trained 3D GAN generator, obtaining a latent code. Hence, the viewpoint of the input image can be changed by altering the camera pose, and the image attributes can be easily edited by modifying the latent code. Unlike 2D GAN inversion, 3D GAN inversion aims to generate images that maintain both the faithfulness of the input view and the high quality of the novel views.
21
+
22
+ On the one hand, existing 3D GAN inversion methods rely only on the generative prior of 3D GANs for generating the occluded regions (i.e., the invisible regions in the input image) in the novel viewpoint, resulting in unfaithful reconstruction of occluded regions in complex scenarios. On the other hand, for 3D scene generation, several recent methods adopt a warping-and-inpainting strategy. They [11, 30, 35] first predict a depth map of a given image, and then warp the input image to novel camera viewpoints with the depth-based correspondence, followed by a 2D inpainting network to synthesize high-fidelity occluded regions of the warped images.
23
+
24
+ To address the inferior reconstruction capability of occluded regions in existing 3D GAN inversion methods, motivated by the success of the warping-and-inpainting strategy in 3D scene generation, we introduce image inpainting into 3D GAN inversion. Unfortunately, 3D GAN inversion is typically trained on single-view datasets, while the above 3D scene generation methods usually require multi-view datasets for training. This leads to two issues: (1) multi-view inconsistency due to the lack of 3D information (i.e., the real novel view image) to guide the inpainting process; (2) the unavailability of ground-truth images from novel views to compute the loss during model training.
25
+
26
+ In this paper, we propose a novel 3D GAN inversion method, WarpGAN, by integrating the warping-and-inpainting strategy into 3D GAN inversion. Specifically, we first train a 3D GAN inversion encoder, which projects the input image into a latent code $w^{+}$ (located in the latent space $\mathcal{W}^{+}$ of the 3D GAN generator). By feeding $w^{+}$ into 3D GAN, we compute the depth map of the input image for geometric warping and perform an initial filling of the occluded regions in the warped image. Subsequently, leveraging the symmetry prior [43, 45] and multi-view image correspondence w.r.t. the same latent code in 3D GANs, we train a style-based novel view inpainting network (SVINet). It can inpaint the occluded regions in the warped image from the original view to the novel view. Hence, we can synthesize plausible novel view images with multi-view consistency. To address the unavailability of ground-truth images, we re-warp the image in the novel view back to the original view and feed it to SVINet. Hence, the loss can be calculated between the inpainting result and the input image. Some visual examples obtained by WarpGAN are given in Fig. 1.
27
+
28
+ In summary, the contributions of this paper are as follows:
29
+
30
+ - We propose a novel 3D GAN inversion method, WarpGAN, which successfully introduces the warping-and-inpainting strategy into 3D GAN inversion, substantially enhancing the quality of occluded regions in novel view synthesis.
31
+ - We introduce a style-based novel view inpainting network, SVINet, by fully leveraging the symmetry prior and the same latent code generated by 3D GAN inversion, achieving multi-view-consistent inpainting of the occluded regions of warped images in novel views.
32
+ - We perform extensive experiments to validate the superiority of WarpGAN, showing the great potential of the warping-and-inpainting strategy in 3D GAN inversion.
33
+
34
+ # 2 Related work
35
+
36
+ 3D-Aware GANs. Recent advancements in 3D-Aware GANs [2, 5, 6, 14, 29, 39, 41] effectively combine the high-quality 2D image synthesis of StyleGAN [20, 21] with the multi-view synthesis
37
+
38
+ capability of NeRF [27], advancing high-quality image synthesis from 2D to 3D and enabling multi-view image generation. These methods typically employ a two-stage generation pipeline, where a low-resolution raw image and feature maps are rendered, followed by upsampling to high-resolution using 2D CNN layers. Such a way ensures geometric consistency across multiple views and achieves impressive photorealism. In this paper, we leverage EG3D [5] as our 3D-aware GAN architecture, which introduces a hybrid explicit-implicit 3D representation (known as the tri-plane).
39
+
40
+ GAN Inversion. Although recent 2D GAN inversion methods [42] have achieved promising editing performance, they suffer from severe flickering and inevitable multi-view inconsistency when editing 3D attributes (e.g., head pose) since the pretrained generator is not 3D-aware. Hence, 3D GAN inversion is developed to maintain multi-view consistency when rendering novel viewpoints. However, directly transferring 2D methods to 3D without effectively incorporating 3D information will inevitably lead to geometry collapse and artifacts.
41
+
42
+ Similar to 2D GAN inversion, 3D GAN inversion can be categorized into optimization-based methods and encoder-based methods. Some optimization-based methods [23, 43, 45] generate multiple pseudo-images from different viewpoints to facilitate optimization. For instance, HFGI3D [43] leverages visibility analysis to achieve pseudo-multi-view optimization; SPI [45] utilizes the facial symmetry prior to synthesize pseudo multi-view images; and Pose Opt. [23] simultaneously optimizes camera pose and latent codes. In addition, In-N-Out [44] optimizes a triplane for out-of-distribution object reconstruction and employs composite volume rendering. Encoder-based methods project the input image into the latent space of the 3D GAN generator and then employ the generative capacity of the 3D GAN to synthesize novel-view images, while fully utilizing the input image to reconstruct the visible regions of the novel-view images. For example, GOAE [46] computes the residual between the input image and the reconstructed image to complement the $\mathcal{F}$ space of the generator, and introduces an occlusion-aware mix tri-plane for novel-view image generation; Triplanenet [3] calculates an offset for the triplane based on the residual and proposes a facial symmetry prior loss; and Dual Encoder [4] employs two encoders (one for visible regions and the other for occluded regions) for inversion and introduces an occlusion-aware triplane discriminator to enhance both fidelity and realism.
43
+
44
+ Our method is intrinsically different from existing methods that rely heavily on 3D GAN generative priors to generate occluded regions: it introduces a novel inpainting network to fill the occluded regions, facilitating the generation of rich details.
45
+
46
+ Depth-based Warping for Single-shot Novel View Synthesis. Some 3D GAN inversion methods [23, 43, 45] use depth-based warping to synthesize pseudo multi-view images for optimization. SPI [45] warps the input image to an adjacent view for pseudo-supervision. Pose Opt. [23] warps the image from the canonical viewpoint to the input viewpoint to assist training. HFGI3D [43] utilizes a 3D GAN to fill the occluded regions of the warped image from the input view to novel views, synthesizing several pseudo novel-view images. However, these methods only rely on a 3D GAN to generate occluded regions, failing to achieve satisfactory results in occluded regions under complex scenarios.
47
+
48
+ Recently, some methods follow the warping-and-inpainting strategy for single-shot NVS on general scenes [11, 30, 35]. They first predict a depth map for the input image, then warp the input image to a novel view using the depth map, and finally perform inpainting on the occluded regions in the novel view. This strategy effectively preserves the information of the input image while leveraging the powerful inpainting capability of 2D inpainting networks to generate reasonable content for occluded regions. Inspired by this strategy, we introduce a 2D inpainting network into 3D GAN inversion by effectively exploiting the symmetry prior and the latent code of the input image.
49
+
50
+ # 3 Methodology
51
+
52
+ # 3.1 Overview
53
+
54
+ As shown in Fig. 2, our WarpGAN consists of a 3D GAN inversion network (including a 3D GAN inversion encoder and a 3D-aware GAN) and a style-based novel view inpainting network (SVINet). First, we utilize a 3D GAN inversion encoder $\mathrm{E}_{w^{+}}$ to project the input image $\mathbf{I}$ into the latent space $\mathcal{W}^{+}$ of the 3D GAN generator, obtaining the latent code $w^{+}$ . Based on this, we utilize a rendering decoder to render the depth map $\mathbf{D}$ of $\mathbf{I}$ and the novel view image $\hat{\mathbf{I}}_{novel}^{w^{+}}$ . Under the guidance of the depth map $\mathbf{D}$ , we warp the input image $\mathbf{I}$ from the original view $c$ to the novel view $c_{novel}$ , thereby obtaining the warped image $\mathbf{I}_{c\rightarrow c_{novel}}^{warp}$ and the occluded regions $\mathbf{M}_{c\rightarrow c_{novel}}^{o}$ of the input image in the
55
+
56
+ ![](images/a45f0715cd026c5ac39fcc09e741668fe0aa113601f3f4347da528a21eaf69c7.jpg)
57
+ Figure 2: Overview of our WarpGAN, which consists of a 3D GAN inversion network and a style-based novel view inpainting network (SVINet). The "Forward warp" flow (blue arrows) illustrates the inference process of novel view synthesis. During model training, we also require the "Reverse warp" flow (red arrows) to warp the novel view image back to the original view for loss computation.
58
+
59
+ target view, that is,
60
+
61
+ $$
62
+ \mathbf{I}_{c \rightarrow c_{\text{novel}}}^{\text{warp}}, \mathbf{M}_{c \rightarrow c_{\text{novel}}}^{o} = \operatorname{warp}(\mathbf{I}; \mathbf{D}, \pi_{c \rightarrow c_{\text{novel}}}, K), \tag{1}
63
+ $$
64
+
65
+ where $\pi_{c\to c_{novel}}$ is a relative camera pose between $c$ and $c_{novel}$ , $K$ is the camera intrinsic matrix, and $\operatorname{warp}(\cdot)$ is a geometric warping function [28, 35] which unprojects pixels of the input image $\mathbf{I}$ with its depth map $\mathbf{D}$ to the 3D space, and reprojects them based on $\pi_{c\to c_{novel}}$ and $K$ .
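+
+ As an illustration of Eq. (1), the following is a minimal NumPy sketch of depth-based forward warping: pixels are unprojected with the depth map, transformed by the relative pose, and re-projected with the intrinsics. It uses a simple nearest-neighbor splat with a z-buffer, whereas the warping function used in the paper follows [28, 35]; the array shapes, the z-buffer handling, and the occlusion-mask convention are illustrative assumptions rather than the released implementation.
+
+ ```python
+ import numpy as np
+
+ def warp(image, depth, pose_rel, K):
+     """image: (H, W, 3); depth: (H, W); pose_rel: (4, 4) relative pose; K: (3, 3) intrinsics."""
+     H, W = depth.shape
+     ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
+     pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # (3, H*W)
+
+     # Unproject pixels of the input view into 3D camera coordinates.
+     cam_pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
+     cam_pts = np.concatenate([cam_pts, np.ones((1, cam_pts.shape[1]))], axis=0)
+
+     # Transform to the novel view and re-project with the intrinsics.
+     novel_pts = (pose_rel @ cam_pts)[:3]
+     proj = K @ novel_pts
+     u = np.round(proj[0] / np.clip(proj[2], 1e-6, None)).astype(int)
+     v = np.round(proj[1] / np.clip(proj[2], 1e-6, None)).astype(int)
+
+     warped = np.zeros_like(image)
+     hit = np.zeros((H, W), dtype=bool)
+     zbuf = np.full((H, W), np.inf)
+     src = image.reshape(-1, 3)
+     for i in range(u.shape[0]):
+         if 0 <= u[i] < W and 0 <= v[i] < H and novel_pts[2, i] < zbuf[v[i], u[i]]:
+             zbuf[v[i], u[i]] = novel_pts[2, i]   # keep the front-most source pixel
+             warped[v[i], u[i]] = src[i]
+             hit[v[i], u[i]] = True
+     occlusion_mask = ~hit  # M^o: novel-view pixels with no source correspondence ("holes")
+     return warped, occlusion_mask
+ ```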
66
+
67
+ Then, we use $\hat{\mathbf{I}}_{novel}^{w^+}$ to fill in the occluded regions of $\mathbf{I}_{c\rightarrow c_{novel}}^{warp}$ , serving as the initial result $\hat{\mathbf{I}}_{novel}^{initial}$ for the occluded regions, which can be formulated as
68
+
69
+ $$
70
+ \hat{\mathbf{I}}_{\text{novel}}^{\text{initial}} = \mathbf{I}_{c \rightarrow c_{\text{novel}}}^{\text{warp}} + \mathbf{M}_{c \rightarrow c_{\text{novel}}}^{o} \cdot \hat{\mathbf{I}}_{\text{novel}}^{w^{+}}. \tag{2}
71
+ $$
72
+
73
+ Subsequently, the initial result $\hat{\mathbf{I}}_{\text {novel }}^{\text {initial }}$ is fed into SVINet for further inpainting, giving the final output $\hat{\mathbf{I}}_{\text {novel }}$ of WarpGAN. Notably, we employ symmetry-aware feature extraction and modulate the convolutions of the inpainting network with $w^{+}$ during the inpainting process. We also construct a style-based loss to ensure consistency between the generated image in the novel view and the original view image.
74
+
75
+ # 3.2 3D GAN Inversion Encoder
76
+
77
+ Similar to existing encoder-based 3D GAN inversion methods, our 3D GAN inversion encoder $\mathrm{E}_{w^{+}}$ projects an input image $\mathbf{I}$ with the camera pose $c$ into the latent space $\mathcal{W}^+$ of the pre-trained 3D GAN, obtaining the latent code $w^{+} = \mathrm{E}_{w^{+}}(\mathbf{I})$. Then, we leverage the generator $\mathrm{G}(\cdot)$ of the 3D GAN to generate the tri-plane and use the rendering decoder $\mathcal{R}$ to render images at specified camera poses. Based on the above, we perform image reconstruction $\hat{\mathbf{I}}^{w^{+}} = \mathcal{R}(\mathrm{G}(w^{+}), c)$ by specifying the camera pose as $c$. In this way, we obtain the novel view image $\hat{\mathbf{I}}_{novel}^{w^{+}}$ corresponding to the novel camera pose $c_{novel}$. Under the principles of NeRF, we replace the color of the sampling points with the distance to the camera during the rendering process, obtaining the depth maps $\mathbf{D}$ and $\mathbf{D}_{novel}$. More implementation details can be found in the Appendix.
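+
+ For reference, the sketch below shows the standard NeRF-style composition by which such a depth map can be rendered: the per-sample color is replaced by the sample-to-camera distance, so the composited value is the expected ray termination depth. The tensor shapes and the handling of the final interval are assumptions for illustration, not the exact renderer of EG3D.
+
+ ```python
+ import torch
+
+ def render_depth(sigmas, t_vals):
+     """sigmas: (num_rays, num_samples) densities; t_vals: (num_rays, num_samples) sample distances."""
+     deltas = t_vals[:, 1:] - t_vals[:, :-1]
+     deltas = torch.cat([deltas, torch.full_like(deltas[:, :1], 1e10)], dim=-1)
+     alphas = 1.0 - torch.exp(-sigmas * deltas)
+     trans = torch.cumprod(
+         torch.cat([torch.ones_like(alphas[:, :1]), 1.0 - alphas + 1e-10], dim=-1), dim=-1
+     )[:, :-1]
+     weights = alphas * trans                  # standard volume rendering weights
+     return (weights * t_vals).sum(dim=-1)     # distance takes the place of color
+ ```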
78
+
79
+ Inspired by GOAE [46], we employ a pyramid-structured Swin-Transformer [26] as the backbone of the encoder, based on which we leverage feature layers at different scales to generate latent codes at various levels.
80
+
81
+ Since our dataset contains only single-view images, we train $\mathrm{E}_{w^{+}}$ using a reconstruction loss $\mathcal{L}_{w^{+}}$ , which includes a pixel-wise (MSE) loss $\mathcal{L}_2$ , a perceptual loss $\mathcal{L}_{\mathrm{LPIPS}}$ [48], and an identity loss $\mathcal{L}_{\mathrm{ID}}$ with a pre-trained ArcFace network [12]:
82
+
83
+ $$
84
+ \mathcal{L}_{w^{+}}\left(\hat{\mathbf{I}}^{w^{+}}, \mathbf{I}\right) = \lambda_{2} \mathcal{L}_{2}\left(\hat{\mathbf{I}}^{w^{+}}, \mathbf{I}\right) + \lambda_{\mathrm{LPIPS}} \mathcal{L}_{\mathrm{LPIPS}}\left(\hat{\mathbf{I}}^{w^{+}}, \mathbf{I}\right) + \lambda_{\mathrm{ID}}^{w^{+}} \mathcal{L}_{\mathrm{ID}}\left(\hat{\mathbf{I}}^{w^{+}}, \mathbf{I}\right), \tag{3}
85
+ $$
86
+
87
+ where $\lambda_{2}$ , $\lambda_{\mathrm{LPIPS}}$ , and $\lambda_{\mathrm{ID}}^{w+}$ denote the loss weights for $\mathcal{L}_2$ , $\mathcal{L}_{\mathrm{LPIPS}}$ , and $\mathcal{L}_{\mathrm{ID}}$ , respectively.
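+
+ A hedged sketch of how Eq. (3) can be assembled is given below. The `lpips` package stands in for the perceptual term, `arcface_embed` is a placeholder for a pre-trained ArcFace [12] embedding network, and defining the identity loss as one minus the cosine similarity is a common convention assumed here; none of these names come from the released code. The default weights follow Sec. 4.1.
+
+ ```python
+ import torch.nn.functional as F
+ import lpips
+
+ lpips_fn = lpips.LPIPS(net="alex")  # perceptual metric; downloads pretrained weights
+
+ def encoder_loss(recon, target, arcface_embed, lam2=1.0, lam_lpips=0.8, lam_id=0.1):
+     l2 = F.mse_loss(recon, target)
+     lp = lpips_fn(recon, target).mean()
+     id_sim = F.cosine_similarity(arcface_embed(recon), arcface_embed(target), dim=-1)
+     l_id = (1.0 - id_sim).mean()  # identity loss as 1 - cosine similarity (assumed convention)
+     return lam2 * l2 + lam_lpips * lp + lam_id * l_id
+ ```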
88
+
89
+ # 3.3 Style-Based Novel View Inpainting Network (SVINet)
90
+
91
+ Due to the existence of occluded regions in the novel view, the warped image contains "holes" (see Fig. 2 for an illustration). To generate high-quality novel-view images, we propose a style-based novel view inpainting network (SVINet) to fill in the "holes" in the warped image.
92
+
93
+ As shown in Fig. 2, our SVINet follows the traditional "encode-inpaint-decoder" architecture [10, 24, 37], consisting of three sub-networks: $N_{E}$ , $N_{I}$ , and $N_{D}$ . Technically, $N_{E}$ is first used to extract features from the model input while performing downsampling. Then, the inpainting operation is performed in the feature space by using $N_{I}$ . Finally, $N_{D}$ is used to upsample the features to obtain the inpainted image.
94
+
95
+ # 3.3.1 Symmetry-Aware Feature Extraction
96
+
97
+ We first use the novel-view image $\hat{\mathbf{I}}_{\text{novel}}^{w^+}$ obtained from 3D GAN inversion to fill in the occluded regions in the warped image $\mathbf{I}_{c \rightarrow c_{\text{novel}}}^{warp}$ (Eq. (1)), resulting in an initial inpainting result $\hat{\mathbf{I}}_{\text{novel}}^{\text{initial}}$ (Eq. (2)). We then feed $\hat{\mathbf{I}}_{\text{novel}}^{\text{initial}}$ into $N_E$ to obtain the feature $\mathbf{F}$ . In addition, we also propose to leverage the facial symmetry [43, 45] by warping the mirrored input image $\mathbf{I}_{\text{mirror}}$ to the target view $c_{\text{novel}}$ , obtaining $\mathbf{I}_{\text{mirror}_{c_{\text{mirror}} \rightarrow c_{\text{novel}}}}^{warp}$ . The mirrored image is then processed in the same manner as described above and fed into $N_E$ to obtain the mirror feature $\mathbf{F}_{\text{mirror}}$ .
98
+
99
+ Subsequently, we utilize $\mathbf{F}$ and $\mathbf{F}_{\text {mirror }}$ to predict the scale map $\mathbf{F}_s$ and the translation map $\mathbf{F}_t$ , which can be used to refine $\mathbf{F}$ via featurewise linear modulation (FiLM) [32], obtaining $\mathbf{F}_r$ , that is,
100
+
101
+ $$
102
+ \begin{array}{l} \left\{\mathbf{F}_{s}, \mathbf{F}_{t}\right\} = \left\{\phi_{s}\left(\left[\mathbf{F}, \mathbf{F}_{\text{mirror}}\right]_{1}\right), \phi_{t}\left(\left[\mathbf{F}, \mathbf{F}_{\text{mirror}}\right]_{1}\right)\right\}, \\ \mathbf{F}_{r} = \mathbf{F}_{s} \odot \mathbf{F} + \mathbf{F}_{t}, \tag{4} \\ \end{array}
103
+ $$
104
+
105
+ where $\phi_s$ and $\phi_t$ are convolutional neural networks; $[,]_1$ denotes concatenation along the 1st dimension, i.e., the channel dimension; $\odot$ denotes the Hadamard product.
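+
+ The symmetry-aware refinement can be sketched as a small PyTorch module: the feature of the warped input and the feature of the warped mirrored input are concatenated along channels to predict a scale map and a translation map, which modulate $\mathbf{F}$ via FiLM. The channel width and the $3 \times 3$ kernel size are assumptions; the paper does not specify the exact layer configuration of $\phi_s$ and $\phi_t$.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class SymmetryFiLM(nn.Module):
+     def __init__(self, channels=256):  # channel width is an assumption
+         super().__init__()
+         self.phi_s = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
+         self.phi_t = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
+
+     def forward(self, feat, feat_mirror):
+         cat = torch.cat([feat, feat_mirror], dim=1)  # concatenation along the channel dim
+         scale = self.phi_s(cat)
+         shift = self.phi_t(cat)
+         return scale * feat + shift                  # F_r = F_s * F + F_t
+ ```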
106
+
107
+ Next, $\mathbf{F}_r$ is successively fed into $N_I$ and $N_D$ to obtain the inpainting result $\hat{\mathbf{I}}_{\text{novel}}$.
108
+
109
+ # 3.3.2 Style-Based Inpainting
110
+
111
+ Inpainting networks typically rely on the information of the input image to fill in the missing regions. However, due to the limited information contained in single-view images, using only this information for inpainting may lead to the issue of multi-view inconsistency. To address the consistency issue, motivated by the fact that images of the same object from different viewpoints share the same latent code in 3D GANs, we introduce the latent code to control the image inpainting process.
112
+
113
+ Technically, we modulate the convolutions [21, 24] in the "inpaint" and "decoder" parts of the inpainting network using the latent code $w^{+}$ obtained from $\mathrm{E}_{w^{+}}$. Modulating the convolutions in this way allows us to control the inpainting process for occluded regions, achieving multi-view consistency in the generated images.
114
+
115
+ Specifically, we first employ a mapping function $\mathcal{A}$ to obtain the style code $s = \mathcal{A}(w^{+})$ . Then the weights of the convolutions $w$ are modulated as
116
+
117
+ $$
118
+ \begin{array}{l} w _ {i j k} ^ {\prime} = s _ {i} \cdot w _ {i j k}, \\ w _ {i j k} ^ {\prime \prime} = w _ {i j k} ^ {\prime} / \sqrt {\sum_ {i , k} w _ {i j k} ^ {\prime 2} + \epsilon}, \tag {5} \\ \end{array}
119
+ $$
120
+
121
+ where $w''$ denotes the final modulated weights; $s_i$ is the scale corresponding to the $i$ th input feature map; $j$ and $k$ enumerate the output feature maps and spatial footprint of the convolution, respectively.
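+
+ Eq. (5) is the StyleGAN2-style modulation/demodulation applied to a single convolution; a minimal sketch is shown below. The function signature and the per-layer placement inside SVINet are assumptions for illustration; only the arithmetic follows Eq. (5).
+
+ ```python
+ import torch
+
+ def modulate_demodulate(weight, style, eps=1e-8):
+     """weight: (out_ch, in_ch, kh, kw); style: (in_ch,) per-input-channel scales s_i."""
+     w = weight * style.view(1, -1, 1, 1)                        # w'_ijk = s_i * w_ijk
+     demod = torch.rsqrt((w ** 2).sum(dim=(1, 2, 3), keepdim=True) + eps)
+     return w * demod                                            # w''_ijk, normalized per output map
+ ```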
122
+
123
+ # 3.3.3 Training Strategy
124
+
125
+ Real data. Since our real dataset contains only single-view images, no target-view images can be used to compute the loss and update the model parameters when synthesizing images from novel views. To address this, we propose to re-warp the warped image from the novel view back to the original view, and then compute the loss between the inpainting result and the input image.
126
+
127
+ Specifically, for the input image $\mathbf{I}$, we first warp it to the novel view $c_{\text{novel}}$ to obtain $\mathbf{I}_{c \rightarrow c_{\text{novel}}}^{\text{warp}}$, and then inpaint it using SVINet to get $\hat{\mathbf{I}}_{\text{novel}}$. Next, we re-warp $\mathbf{I}_{c \rightarrow c_{\text{novel}}}^{\text{warp}}$ back to the source view $c$ and inpaint it again to obtain $\hat{\mathbf{I}}^{\text{re-warp}}$. Based on the above, given the input image $\mathbf{I}$, we obtain two inpainted images, $\hat{\mathbf{I}}_{\text{novel}}$ and $\hat{\mathbf{I}}^{\text{re-warp}}$, for loss computation.
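+
+ One real-data training step under this re-warp strategy can be sketched as below. All callables (`warp_fn`, `svinet`, `loss_fn`) and their signatures are placeholders for the components described above, and the use of the novel-view depth for the reverse warp is an assumption; this is an illustration of the data flow, not the released training code.
+
+ ```python
+ def real_data_step(image, depth, depth_novel, pose_src, pose_novel,
+                    warp_fn, svinet, loss_fn):
+     # Forward warp to a sampled novel pose and inpaint the holes with SVINet.
+     warped, mask = warp_fn(image, depth, pose_src, pose_novel)
+     inpainted_novel = svinet(warped, mask, pose_novel)          # I_hat_novel
+
+     # Re-warp back to the source pose and inpaint again.
+     rewarped, mask_back = warp_fn(warped, depth_novel, pose_novel, pose_src)
+     inpainted_back = svinet(rewarped, mask_back, pose_src)      # I_hat_re-warp
+
+     # Supervision exists only at the source view, so the loss compares the
+     # re-warped-and-inpainted result with the original input image.
+     return inpainted_novel, inpainted_back, loss_fn(inpainted_back, image)
+ ```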
128
+
129
+ Synthetic data. In addition to real data, we also utilize synthetic data to assist in training our model. We sample a latent code $w_{synth}$ from the latent space of 3D GAN and generate two images $\mathbf{I}_s^{synth}$ and $\mathbf{I}_t^{synth}$ from different viewpoints. We then warp $\mathbf{I}_s^{synth}$ from the source view to the target view and input it into SVINet to obtain the inpainted image $\hat{\mathbf{I}}_t^{synth}$ . Finally, we compute the loss between $\hat{\mathbf{I}}_t^{synth}$ and $\mathbf{I}_t^{synth}$ .
130
+
131
+ Loss function. Our loss function consists of three components: the reconstruction loss, the consistency loss, and the adversarial loss. The reconstruction loss $\mathcal{L}_{\mathrm{rec}}$ includes the pixel-wise MAE loss $\mathcal{L}_1$ , the perceptual loss $\mathcal{L}_{\mathrm{P}}$ [37], and the identity loss $\mathcal{L}_{\mathrm{ID}}$ [12]:
132
+
133
+ $$
134
+ \mathcal{L}_{\mathrm{rec}}(\hat{\mathbf{I}}, \mathbf{I}) = \lambda_{1} \mathcal{L}_{1}(\hat{\mathbf{I}} - \mathbf{I}) + \lambda_{\mathrm{P}} \mathcal{L}_{\mathrm{P}}(\hat{\mathbf{I}}, \mathbf{I}) + \lambda_{\mathrm{ID}} \mathcal{L}_{\mathrm{ID}}(\hat{\mathbf{I}}, \mathbf{I}), \tag{6}
135
+ $$
136
+
137
+ where $\lambda_{1}$, $\lambda_{\mathrm{P}}$, and $\lambda_{\mathrm{ID}}$ denote the loss weights for $\mathcal{L}_1$, $\mathcal{L}_{\mathrm{P}}$, and $\mathcal{L}_{\mathrm{ID}}$, respectively; $\hat{\mathbf{I}}$ and $\mathbf{I}$ represent the generated image and the input image, respectively.
138
+
139
+ To ensure multi-view consistency, we introduce the consistency loss $\mathcal{L}_{\mathrm{c}}$ , which computes the MSE between the latent codes of the original image and the inpainted image. This loss is used to control the multi-view consistency of the generated images:
140
+
141
+ $$
142
+ \mathcal{L}_{\mathrm{c}}(\hat{\mathbf{I}}, \mathbf{I}) = \left\| \mathrm{E}_{w^{+}}(\hat{\mathbf{I}}) - \mathrm{E}_{w^{+}}(\mathbf{I}) \right\|_{2}. \tag{7}
143
+ $$
144
+
145
+ To further enhance the quality of the inpainted images, we also use an adversarial loss:
146
+
147
+ $$
148
+ \mathcal {L} _ {\mathrm {a d v}} ^ {G} = - \mathbb {E} [ \log (D (\hat {x})) ], \tag {8}
149
+ $$
150
+
151
+ $$
152
+ \mathcal {L} _ {\mathrm {a d v}} ^ {D} = - \mathbb {E} [ \log (D (x)) ] - \mathbb {E} [ \log (1 - D (\hat {x})) ] + \gamma \mathbb {E} [ \| \nabla D (x) \| _ {2} ], \tag {9}
153
+ $$
154
+
155
+ where $x$ denotes the real and synthetic images (i.e., $\mathbf{I}$ and $\mathbf{I}_t^{synth}$); $\hat{x}$ represents the inpainted images (i.e., $\hat{\mathbf{I}}_{\text{novel}}$, $\hat{\mathbf{I}}^{\text{re-warp}}$, and $\hat{\mathbf{I}}_t^{synth}$); $D$ denotes the discriminator [10, 24, 37].
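+
+ With a discriminator that outputs raw logits, Eqs. (8) and (9) can be written with the numerically stable softplus identities, as in the sketch below. The penalty weight $\gamma$ is not specified in the text and is an assumed placeholder; `real_images` must have `requires_grad=True` for the gradient term to be computed.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def generator_adv_loss(d_fake_logits):
+     return F.softplus(-d_fake_logits).mean()              # -E[log D(x_hat)]
+
+ def discriminator_adv_loss(d_real_logits, d_fake_logits, real_images, gamma=1.0):
+     loss = F.softplus(-d_real_logits).mean() + F.softplus(d_fake_logits).mean()
+     grad = torch.autograd.grad(d_real_logits.sum(), real_images, create_graph=True)[0]
+     penalty = grad.flatten(1).norm(dim=1).mean()           # E[ ||grad D(x)||_2 ]
+     return loss + gamma * penalty
+ ```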
156
+
157
+ In summary, the loss function for SVINet can be formulated as follows:
158
+
159
+ $$
160
+ \begin{array}{l} \mathcal{L}_{\mathrm{SVINet}} = \lambda_{\mathrm{rec}} \mathcal{L}_{\mathrm{rec}}\left(\left[\hat{\mathbf{I}}^{\text{re-warp}}, \hat{\mathbf{I}}_{t}^{\text{synth}}\right]_{0}, \left[\mathbf{I}, \mathbf{I}_{t}^{\text{synth}}\right]_{0}\right) \\ + \lambda_{\mathrm{c}} \mathcal{L}_{\mathrm{c}}\left(\left[\hat{\mathbf{I}}_{\text{novel}}, \hat{\mathbf{I}}^{\text{re-warp}}, \hat{\mathbf{I}}_{t}^{\text{synth}}\right]_{0}, \left[\mathbf{I}, \mathbf{I}, \mathbf{I}_{t}^{\text{synth}}\right]_{0}\right) + \lambda_{\mathrm{adv}} \mathcal{L}_{\mathrm{adv}}^{G}, \tag{10} \\ \end{array}
161
+ $$
162
+
163
+ where $[,]_0$ denotes concatenation along the 0-th dimension (i.e., the batch dimension); $\lambda_{\mathrm{rec}}$ , $\lambda_{\mathrm{c}}$ , and $\lambda_{\mathrm{adv}}$ denote the loss weights for $\mathcal{L}_{\mathrm{rec}}$ , $\mathcal{L}_{\mathrm{c}}$ , and $\mathcal{L}_{\mathrm{adv}}^G$ , respectively.
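+
+ The overall objective in Eq. (10) can be assembled as in the sketch below, concatenating the relevant outputs along the batch dimension before each term. The individual loss callables mirror Eqs. (6)-(8) and are passed in as placeholders; the default weights follow Sec. 4.1.
+
+ ```python
+ import torch
+
+ def svinet_loss(i_novel, i_rewarp, i_synth_hat, i_input, i_synth_gt,
+                 rec_loss, cons_loss, adv_loss_g,
+                 lam_rec=1.0, lam_c=0.1, lam_adv=10.0):
+     rec = rec_loss(torch.cat([i_rewarp, i_synth_hat], dim=0),
+                    torch.cat([i_input, i_synth_gt], dim=0))
+     cons = cons_loss(torch.cat([i_novel, i_rewarp, i_synth_hat], dim=0),
+                      torch.cat([i_input, i_input, i_synth_gt], dim=0))
+     return lam_rec * rec + lam_c * cons + lam_adv * adv_loss_g()
+ ```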
164
+
165
+ # 4 Experiments
166
+
167
+ # 4.1 Experimental Settings
168
+
169
+ Datasets. Our experiments mainly focus on face datasets. We use the FFHQ dataset [20] and 100K pairs of synthetic data for training. The synthetic pairs $\{\mathbf{I}_s^{synth},\mathbf{I}_t^{synth}\}$ are generated from EG3D [5], sharing the same latent code $w_{synth}$ but rendered with different camera poses. To evaluate the generalization ability of our method, we employ the CelebA-HQ dataset [19] and the multi-view MEAD dataset [40] for testing. We preprocess the images in the datasets and extract their camera poses in the same manner as [5].
170
+
171
+ Implementation Details. For all experiments, we employ the EG3D [5] generator pre-trained on FFHQ. For the 3D GAN inversion encoder $\mathrm{E}_{w^{+}}$ , we set the batch size to 4 and train it for $500\mathrm{K}$
172
+
173
+ Table 1: Comparisons with state-of-the-art methods on the CelebA-HQ and MEAD datasets.
174
+
175
+ <table><tr><td rowspan="3">Category</td><td rowspan="3">Method</td><td colspan="2">CelebA-HQ</td><td colspan="6">MEAD</td><td rowspan="3">Time (s)↓</td></tr><tr><td rowspan="2">FID↓</td><td rowspan="2">ID↑</td><td colspan="2">LPIPS ↓</td><td colspan="2">FID ↓</td><td colspan="2">ID ↑</td></tr><tr><td>±30°</td><td>±60°</td><td>±30°</td><td>±60°</td><td>±30°</td><td>±60°</td></tr><tr><td rowspan="4">Optimization</td><td>SG2 W+</td><td>26.09</td><td>0.7369</td><td>0.2910</td><td>0.3372</td><td>39.30</td><td>64.47</td><td>0.7992</td><td>0.7533</td><td>43.72</td></tr><tr><td>PTI</td><td>25.70</td><td>0.7616</td><td>0.2771</td><td>0.3341</td><td>44.23</td><td>66.00</td><td>0.8089</td><td>0.7582</td><td>62.65</td></tr><tr><td>Pose Opt.</td><td>29.04</td><td>0.7500</td><td>0.2990</td><td>0.3428</td><td>52.25</td><td>73.23</td><td>0.7954</td><td>0.7405</td><td>91.60</td></tr><tr><td>HFGI3D</td><td>24.30</td><td>0.7641</td><td>0.2775</td><td>0.3494</td><td>51.24</td><td>79.81</td><td>0.8019</td><td>0.7370</td><td>264.5</td></tr><tr><td rowspan="3">Encoder</td><td>pSp</td><td>38.46</td><td>0.7375</td><td>0.3116</td><td>0.3720</td><td>65.21</td><td>94.34</td><td>0.7900</td><td>0.7401</td><td>0.05430</td></tr><tr><td>GOAE</td><td>35.41</td><td>0.7498</td><td>0.2818</td><td>0.3453</td><td>59.69</td><td>86.23</td><td>0.8109</td><td>0.7370</td><td>0.07999</td></tr><tr><td>Triplanenet</td><td>32.65</td><td>0.7706</td><td>0.3379</td><td>0.4103</td><td>76.62</td><td>130.55</td><td>0.8059</td><td>0.7135</td><td>0.1214</td></tr><tr><td></td><td>Ours</td><td>19.12</td><td>0.7882</td><td>0.2490</td><td>0.3008</td><td>38.15</td><td>64.01</td><td>0.8315</td><td>0.7741</td><td>0.08390</td></tr></table>
176
+
177
+ ![](images/8679da22d64d399625556808c02d6a6197dd9770832275c3d29cf287ddc37aa5.jpg)
178
+ Figure 3: Comparisons of novel view synthesis on the CelebA-HQ dataset between our WarpGAN and several state-of-the-art methods.
179
+
180
+ iterations on the FFHQ dataset. We use the Ranger optimizer, which combines Rectified Adam [25] with the Lookahead technique [47], with a learning rate of 1e-4 for $\mathrm{E}_{w^{+}}$. The values of $\lambda_{2}$, $\lambda_{\mathrm{LPIPS}}$, and $\lambda_{\mathrm{ID}}^{w^{+}}$ in Eq. (3) are set to 1.0, 0.8, and 0.1, respectively. For SVINet, we set the batch size to 2 and train it for 300K iterations on both the FFHQ dataset and synthetic data pairs. For the novel view camera poses during the training process, we sample from the camera poses of the pose-rebalanced FFHQ dataset [5]. We use the Adam optimizer [22], with learning rates of 1e-3 and 1e-4 for the SVINet and discriminator, respectively. The values of $\lambda_{1}$, $\lambda_{\mathrm{P}}$, and $\lambda_{\mathrm{ID}}$ in Eq. (6) are set to 10.0, 30.0, and 0.1, respectively. The values of $\lambda_{\mathrm{rec}}$, $\lambda_{\mathrm{c}}$, and $\lambda_{\mathrm{adv}}$ in Eq. (10) are set to 1.0, 0.1, and 10.0, respectively.
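+
+ For convenience, the hyperparameters reported above can be summarized as a plain config; this is only a restatement of Sec. 4.1, not released configuration code.
+
+ ```python
+ config = {
+     "encoder": {
+         "batch_size": 4, "iterations": 500_000,
+         "optimizer": "Ranger (Rectified Adam + Lookahead)", "lr": 1e-4,
+         "lambda_2": 1.0, "lambda_lpips": 0.8, "lambda_id": 0.1,
+     },
+     "svinet": {
+         "batch_size": 2, "iterations": 300_000, "optimizer": "Adam",
+         "lr": 1e-3, "lr_discriminator": 1e-4,
+         "lambda_1": 10.0, "lambda_p": 30.0, "lambda_id": 0.1,
+         "lambda_rec": 1.0, "lambda_c": 0.1, "lambda_adv": 10.0,
+     },
+ }
+ ```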
181
+
182
+ Baselines. We compare our WarpGAN with several 3D GAN inversion methods, including optimization-based methods (such as SG2 $\mathcal{W}^+$ [1], PTI [34], Pose Opt. [23], and HFGI3D [43]) and encoder-based methods (such as pSp [33], GOAE [46], Triplanenet [3], and Dual Encoder [4]). Note that Dual Encoder employs a 3D GAN other than EG3D and removes the background during training. Since this differs from our experimental setup, we only compare with it in the qualitative analysis.
183
+
184
+ Evaluation metrics. We perform novel view synthesis evaluation on the CelebA-HQ dataset and the MEAD dataset. For the CelebA-HQ dataset, we compute the Fréchet Inception Distance (FID) [17] and ID similarity [12] between the original images and the novel view images. For the multi-view MEAD dataset, each person includes five face images with increasing yaw angles (front, $\pm 30^{\circ}$ , and $\pm 60^{\circ}$ ). We use the front image as input and synthesize the other four views. We then compute the LPIPS [48], FID, and ID similarity between the synthesized images and their corresponding ground-truth images. The inference times (Time) in Table 1 are measured on a single Nvidia GeForce RTX 4090 GPU.
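+
+ The ID metric is the cosine similarity between face-recognition embeddings of the two images; a minimal sketch is given below, where `arcface_embed` is a placeholder for a pre-trained ArcFace [12] network rather than an API provided by the paper.
+
+ ```python
+ import torch.nn.functional as F
+
+ def id_similarity(img_a, img_b, arcface_embed):
+     emb_a, emb_b = arcface_embed(img_a), arcface_embed(img_b)
+     return F.cosine_similarity(emb_a, emb_b, dim=-1).mean()
+ ```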
185
+
186
+ ![](images/7b592177aa20acaf08e462c12c77484425948bb6c71ce9cde92df67d4178f641.jpg)
187
+ Figure 4: Comparisons of different methods on the MEAD dataset for synthesizing images of the other four views (R60, R30, L30, and L60) using the front image as input.
188
+
189
+ # 4.2 Comparisons with State-of-the-Art Methods
190
+
191
+ Quantitative Evaluation. As shown in Table 1, we provide the performance of different methods on the CelebA-HQ dataset and the MEAD dataset. It can be clearly observed that optimization-based methods achieve better performance than encoder-based methods, but at the cost of significantly higher inference times. Among them, HFGI3D, which performs optimization twice using PTI (once for filling the occluded regions of warped images and once for multi-view optimization), shows substantial performance improvement but suffers from slow inference times. In contrast, our WarpGAN, which has an inference time comparable to encoder-based methods, surpasses the performance of optimization-based methods. The excellent performance on the MEAD dataset demonstrates that our method is capable of effectively preserving multi-view consistency.
192
+
193
+ Qualitative Evaluation. We provide visualization results of novel view synthesis in Fig. 3 and Fig. 4. By successfully integrating the warping-and-inpainting strategy into 3D GAN inversion, our method can better preserve facial details and generate more reasonable occluded regions. Moreover, our method is capable of maintaining 3D consistency in novel views more naturally.
194
+
195
+ # 4.3 Ablation Studies
196
+
197
+ Table 2: Ablation on different components of our WarpGAN.
198
+
199
+ <table><tr><td>Name</td><td>Model</td><td>FID ↓</td><td>ID ↑</td></tr><tr><td>A</td><td>Ew+</td><td>36.07</td><td>0.7437</td></tr><tr><td>B</td><td>w/o SVINet</td><td>29.28</td><td>0.7735</td></tr><tr><td>C</td><td>w/o Modw+ &amp; Lc</td><td>19.71</td><td>0.7879</td></tr><tr><td>D</td><td>w/o Modw+</td><td>19.47</td><td>0.7880</td></tr><tr><td>E</td><td>w/o symmetry</td><td>20.04</td><td>0.7825</td></tr><tr><td>F</td><td>w/o synth data</td><td>19.18</td><td>0.7880</td></tr><tr><td>G</td><td>Full Model</td><td>19.12</td><td>0.7882</td></tr></table>
200
+
201
+ To investigate the contributions of key components in our method, we conduct ablation studies. In Table 2, we compare the quality of novel view synthesis using different model variants on the CelebA-HQ dataset.
202
+
203
+ Comparing "B" and "G" clearly demonstrates the significant role of SVINet in inpainting occluded regions. Comparing "C", "D", and "G" shows that modulating the convolutions of SVINet with $w^{+}$ and incorporating $\mathcal{L}_c$ enhance the performance of our method. Comparing "E"
204
+
205
+ ![](images/8ebd1b64818f76cb206fd76f9fed0ba4e89d2163a5e7949f0f270eb31a6df6f0.jpg)
206
+
207
+ ![](images/3fbab413b905b00f7fc445534c0059889b6b8c96ae8833cab5cb3783f128b5ce.jpg)
208
+
209
+ ![](images/499b111722a3a5bdacc3082b87cfbdc1879c044a9da3ed130e333a5b22cf8d2d.jpg)
210
+ Figure 5: (a) Qualitative comparisons of the Full Model with model variants "C", "D", and "E"; (b) Some failure cases; (c) Comparisons of image attribute editing effects with PTI and HFGI3D.
211
+
212
+ and "G" indicates that leveraging facial symmetry prior helps generate occluded regions in novel views. Comparing "F" and "G" reveals that training with synthetic data slightly improves the quality of novel view synthesis. We also qualitatively compare "C", "D", "E", and "G" (Full Model) in Fig. 5(a). Incorporating the latent code to control the inpainting process of SVINet and the symmetry prior can provide more information, reduce blurring and artifacts, and generate more detailed results.
213
+
214
+ # 4.4 Editing Application
215
+
216
+ Since our WarpGAN achieves novel view synthesis by inpainting warped images, the visible parts of the novel view images are minimally affected by the latent code. Consequently, manipulating the latent code alone does not enable attribute editing of the image. To address this issue, similar to HFGI3D [43], we utilize WarpGAN to synthesize a series of novel view images, which are then fed into PTI [34] for optimization. This process yields an optimized latent code $w_{opt}^{+}$ and a fine-tuned 3D GAN generator. In this way, attribute editing of the input image and novel view rendering can be achieved by editing $w_{opt}^{+}$ [15, 31, 36] and modifying the camera pose $c$ . As shown in Fig. 5(c), we perform attribute editing on the input image for four attributes: "Glasses", "Anger", "Old", and "Young", and compare the results with those from PTI and HFGI3D. It can be observed that the edited images obtained by using multi-view images synthesized by WarpGAN for optimization assistance exhibit higher fidelity and appear more natural.
217
+
218
+ # 5 Conclusion
219
+
220
+ In this paper, motivated by the achievement of the warping-and-inpainting strategy in 3D scene generation, we successfully integrate image inpainting with 3D GAN inversion and propose a novel 3D GAN inversion method, WarpGAN, for high-quality novel view synthesis from a single image. Our WarpGAN consists of a 3D GAN inversion network and SVINet. Specifically, we first obtain the depth of the input image using 3D GAN inversion, then apply depth-based warping to the input image to obtain the warped image, and finally use SVINet to fill in the occluded regions of the warped image. Notably, our SVINet leverages symmetry prior and the latent code for multi-view consistency inpainting. Extensive qualitative and quantitative experiments demonstrate that our method outperforms existing state-of-the-art optimization-based and encoder-based methods.
221
+
222
+ Limitations. Due to the inevitable errors in the depth map [11, 30, 35], the warped image sometimes becomes unreliable, which in turn prevents our SVINet from eliminating such artifacts. As illustrated in Fig. 5(b), when the angle variation is small, SVINet can alleviate the deformation of the eyes. However, as the angle variation increases, the output of SVINet deteriorates.
223
+
224
+ # Acknowledgments and Disclosure of Funding
225
+
226
+ This work was supported by the National Natural Science Foundation of China under Grant 62372388 and Grant U21A20514, the Major Science and Technology Plan Project on the Future Industry Fields of Xiamen City under Grant 3502Z20241029 and Grant 3502Z20241027, and the Fundamental Research Funds for the Central Universities under Grant 20720240076 and Grant ZYGX2021J004.
227
+
228
+ # References
229
+
230
+ [1] R. Abdal, Y. Qin, and P. Wonka. Image2StyleGAN++: How to edit the embedded images? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8296-8305, 2020.
231
+ [2] S. An, H. Xu, Y. Shi, G. Song, U. Y. Ogras, and L. Luo. Panohead: Geometry-aware 3d full-head synthesis in 360°. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20950-20959, 2023.
232
+ [3] A. R. Bhattarai, M. Nießner, and A. Sevastopolsky. Triplanenet: An encoder for EG3D inversion. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3055-3065, 2024.
233
+ [4] B. B. Bilecen, A. Gokmen, and A. Dundar. Dual encoder GAN inversion for high-fidelity 3d head reconstruction from single images. Advances in Neural Information Processing Systems, pages 87357-87385, 2024.
234
+ [5] E. R. Chan, C. Z. Lin, M. A. Chan, K. Nagano, B. Pan, S. De Mello, O. Gallo, L. J. Guibas, J. Tremblay, S. Khamis, et al. Efficient geometry-aware 3d generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16123-16133, 2022.
235
+ [6] E. R. Chan, M. Monteiro, P. Kellnhofer, J. Wu, and G. Wetzstein. pi-GAN: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5799-5809, 2021.
236
+ [7] X. Chen, H. Fan, R. Girshick, and K. He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020.
237
+ [8] L. Chi, B. Jiang, and Y. Mu. Fast fourier convolution. Advances in Neural Information Processing Systems, pages 4479-4488, 2020.
238
+ [9] Y. Choi, Y. Uh, J. Yoo, and J.-W. Ha. Stargan v2: Diverse image synthesis for multiple domains. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8188-8197, 2020.
239
+ [10] T. Chu, J. Chen, J. Sun, S. Lian, Z. Wang, Z. Zuo, L. Zhao, W. Xing, and D. Lu. Rethinking fast fourier convolution in image inpainting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 23195–23205, 2023.
240
+ [11] J. Chung, S. Lee, H. Nam, J. Lee, and K. M. Lee. Luciddreamer: Domain-free generation of 3d gaussian splatting scenes. arXiv preprint arXiv:2311.13384, 2023.
241
+ [12] J. Deng, J. Guo, N. Xue, and S. Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4690-4699, 2019.
242
+ [13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 2014.
243
+ [14] J. Gu, L. Liu, P. Wang, and C. Theobalt. StyleNeRF: A style-based 3d-aware generator for high-resolution image synthesis. In Proceedings of International Conference on Learning Representations, 2022.
244
+ [15] E. Härkönen, A. Hertzmann, J. Lehtinen, and S. Paris. GANSpace: Discovering interpretable GAN controls. Advances in Neural Information Processing Systems, pages 9841-9850, 2020.
245
+ [16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
246
+ [17] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.
247
+
248
+ [18] J. T. Kajiya and B. P. Von Herzen. Ray tracing volume densities. ACM SIGGRAPH Computer Graphics, pages 165-174, 1984.
249
+ [19] T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
250
+ [20] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401-4410, 2019.
251
+ [21] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8110-8119, 2020.
252
+ [22] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
253
+ [23] J. Ko, K. Cho, D. Choi, K. Ryoo, and S. Kim. 3d GAN inversion with pose optimization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2967-2976, 2023.
254
+ [24] W. Li, Z. Lin, K. Zhou, L. Qi, Y. Wang, and J. Jia. Mat: Mask-aware transformer for large hole image inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10758-10768, 2022.
255
+ [25] L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and J. Han. On the variance of the adaptive learning rate and beyond. In Proceedings of International Conference on Learning Representations, 2020.
256
+ [26] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012-10022, 2021.
257
+ [27] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, pages 99-106, 2021.
258
+ [28] S. Niklaus and F. Liu. Softmax splatting for video frame interpolation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5437-5446, 2020.
259
+ [29] R. Or-El, X. Luo, M. Shan, E. Shechtman, J. J. Park, and I. Kemelmacher-Shlizerman. StyleSDF: High-resolution 3d-consistent image and geometry generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13503–13513, 2022.
260
+ [30] H. Ouyang, K. Heal, S. Lombardi, and T. Sun. Text2immersion: Generative immersive scene with 3d gaussians. arXiv preprint arXiv:2312.09242, 2023.
261
+ [31] O. Patashnik, Z. Wu, E. Shechtman, D. Cohen-Or, and D. Lischinski. StyleCLIP: Text-driven manipulation of StyleGAN imagery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2085–2094, 2021.
262
+ [32] E. Perez, F. Strub, H. De Vries, V. Dumoulin, and A. Courville. Film: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018.
263
+ [33] E. Richardson, Y. Alaluf, O. Patashnik, Y. Nitzan, Y. Azar, S. Shapiro, and D. Cohen-Or. Encoding in style: A StyleGAN encoder for image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2287–2296, 2021.
264
+ [34] D. Roich, R. Mokady, A. H. Bermano, and D. Cohen-Or. Pivotal tuning for latent-based editing of real images. ACM Transactions on Graphics, pages 1–13, 2022.
265
+ [35] J. Seo, K. Fukuda, T. Shibuya, T. Narihira, N. Murata, S. Hu, C.-H. Lai, S. Kim, and Y. Mitsufuji. Genwarp: Single image to novel views with semantic-preserving generative warping. Advances in Neural Information Processing Systems, 2024.
266
+ [36] Y. Shen, C. Yang, X. Tang, and B. Zhou. InterFaceGAN: Interpreting the disentangled face representation learned by GANs. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 2004–2018, 2020.
267
+ [37] R. Suvorov, E. Logacheva, A. Mashikhin, A. Remizova, A. Ashukha, A. Silvestrov, N. Kong, H. Goka, K. Park, and V. Lempitsky. Resolution-robust large mask inpainting with fourier convolutions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2149–2159, 2022.
268
+
269
+ [38] O. Tov, Y. Alaluf, Y. Nitzan, O. Patashnik, and D. Cohen-Or. Designing an encoder for StyleGAN image manipulation. ACM Transactions on Graphics, pages 1-14, 2021.
270
+ [39] A. Trevithick, M. Chan, T. Takikawa, U. Iqbal, S. De Mello, M. Chandraker, R. Ramamoorthi, and K. Nagano. What you see is what you GAN: Rendering every pixel for high-fidelity geometry in 3d GANs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22765–22775, 2024.
271
+ [40] K. Wang, Q. Wu, L. Song, Z. Yang, W. Wu, C. Qian, R. He, Y. Qiao, and C. C. Loy. Mead: A large-scale audio-visual dataset for emotional talking-face generation. In Proceedings of European Conference on Computer Vision, pages 700–717, 2020.
272
+ [41] Y. Wu, J. Zhang, H. Fu, and X. Jin. Lpff: A portrait dataset for face generators across large poses. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20327-20337, 2023.
273
+ [42] W. Xia, Y. Zhang, Y. Yang, J.-H. Xue, B. Zhou, and M.-H. Yang. GAN inversion: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 3121-3138, 2022.
274
+ [43] J. Xie, H. Ouyang, J. Piao, C. Lei, and Q. Chen. High-fidelity 3d GAN inversion by pseudo-multi-view optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 321-331, 2023.
275
+ [44] Y. Xu, Z. Shu, C. Smith, S. W. Oh, and J.-B. Huang. In-n-out: Faithful 3d GAN inversion with volumetric decomposition for face editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7225-7235, 2024.
276
+ [45] F. Yin, Y. Zhang, X. Wang, T. Wang, X. Li, Y. Gong, Y. Fan, X. Cun, Y. Shan, C. Oztireli, et al. 3d GAN inversion with facial symmetry prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 342-351, 2023.
277
+ [46] Z. Yuan, Y. Zhu, Y. Li, H. Liu, and C. Yuan. Make encoder great again in 3d GAN inversion through geometry and occlusion-aware encoding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2437-2447, 2023.
278
+ [47] M. Zhang, J. Lucas, J. Ba, and G. E. Hinton. Lookahead optimizer: k steps forward, 1 step back. Advances in Neural Information Processing Systems, 32, 2019.
279
+ [48] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586-595, 2018.
280
+
281
+ # NeurIPS Paper Checklist
282
+
283
+ # 1. Claims
284
+
285
+ Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
286
+
287
+ Answer: [Yes]
288
+
289
+ Justification: The abstract and introduction clearly outline the proposed WarpGAN method, its components, and the improvements in novel view synthesis, accurately reflecting the paper's contributions and scope.
290
+
291
+ Guidelines:
292
+
293
+ - The answer NA means that the abstract and introduction do not include the claims made in the paper.
294
+ - The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
295
+ - The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
296
+ - It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
297
+
298
+ # 2. Limitations
299
+
300
+ Question: Does the paper discuss the limitations of the work performed by the authors?
301
+
302
+ Answer: [Yes]
303
+
304
+ Justification: We discuss the limitations of our method in Sec. 5.
305
+
306
+ Guidelines:
307
+
308
+ - The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
309
+ - The authors are encouraged to create a separate "Limitations" section in their paper.
310
+ - The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
311
+ - The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
312
+ - The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
313
+ - The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
314
+ - If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
315
+ - While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
316
+
317
+ # 3. Theory assumptions and proofs
318
+
319
+ Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
320
+
321
+ Answer: [NA]
322
+
323
+ Justification: Our paper does not involve theoretical results or their proofs.
324
+
325
+ Guidelines:
326
+
327
+ - The answer NA means that the paper does not include theoretical results.
328
+ - All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
329
+ - All assumptions should be clearly stated or referenced in the statement of any theorems.
330
+ - The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
331
+ - Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
332
+ - Theorems and Lemmas that the proof relies upon should be properly referenced.
333
+
334
+ # 4. Experimental result reproducibility
335
+
336
+ Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
337
+
338
+ Answer: [Yes]
339
+
340
+ Justification: We provide the network architecture and the training strategy in Sec. 3, the dataset usage and hyperparameter settings in Sec. 4.1, and additional details in the Appendix, which fully disclose the information needed to reproduce the main experimental results.
341
+
342
+ Guidelines:
343
+
344
+ - The answer NA means that the paper does not include experiments.
345
+ - If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
346
+ - If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
347
+ - Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
348
+ - While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
349
+
350
+ (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
351
+ (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
352
+ (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
353
+ (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
354
+
355
+ # 5. Open access to data and code
356
+
357
+ Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
358
+
359
+ Answer: [Yes]
360
+
361
+ Justification: We submit the code in the supplementary material, and all the datasets used are publicly available.
362
+
363
+ Guidelines:
364
+
365
+ - The answer NA means that paper does not include experiments requiring code.
366
+ - Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
367
+ - While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
368
+ - The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
369
+ - The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
370
+ - The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
371
+ - At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
372
+ - Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
373
+
374
+ # 6. Experimental setting/details
375
+
376
+ Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
377
+
378
+ Answer: [Yes]
379
+
380
+ Justification: We provide all the training and test details, including datasets, implementation details, baselines, and evaluation metrics in Sec. 4.1.
381
+
382
+ Guidelines:
383
+
384
+ - The answer NA means that the paper does not include experiments.
385
+ - The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
386
+ - The full details can be provided either with the code, in appendix, or as supplemental material.
387
+
388
+ # 7. Experiment statistical significance
389
+
390
+ Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
391
+
392
+ Answer: [No]
393
+
394
+ Justification: Our task is image generation rather than prediction. Same as existing 3D GAN inversion methods, we use metrics such as FID, ID, and LPIPS, which do not include error bars.
395
+
396
+ Guidelines:
397
+
398
+ - The answer NA means that the paper does not include experiments.
399
+ - The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
400
+
401
+ - The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
402
+ - The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
403
+ - The assumptions made should be given (e.g., Normally distributed errors).
404
+ - It should be clear whether the error bar is the standard deviation or the standard error of the mean.
405
+ - It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
406
+ - For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
407
+ - If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
408
+
409
+ # 8. Experiments compute resources
410
+
411
+ Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
412
+
413
+ Answer: [Yes]
414
+
415
+ Justification: We provide the execution times for various methods in Table 1 and specify the computational resources used in Sec. 4.1.
416
+
417
+ Guidelines:
418
+
419
+ - The answer NA means that the paper does not include experiments.
420
+ - The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
421
+ - The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
422
+ - The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
423
+
424
+ # 9. Code of ethics
425
+
426
+ Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
427
+
428
+ Answer: [Yes]
429
+
430
+ Justification: The research conducted in the paper conforms with the NeurIPS Code of Ethics.
431
+
432
+ Guidelines:
433
+
434
+ - The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
435
+ - If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
436
+ - The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
437
+
438
+ # 10. Broader impacts
439
+
440
+ Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
441
+
442
+ Answer: [Yes]
443
+
444
+ Justification: We discuss the societal impacts in the Appendix.
445
+
446
+ Guidelines:
447
+
448
+ - The answer NA means that there is no societal impact of the work performed.
449
+
450
+ - If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
451
+ - Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
452
+ - The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
453
+ - The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
454
+ - If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
455
+
456
+ # 11. Safeguards
457
+
458
+ Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
459
+
460
+ Answer: [Yes]
461
+
462
+ Justification: We describe the potential risks in the Appendix.
463
+
464
+ Guidelines:
465
+
466
+ - The answer NA means that the paper poses no such risks.
467
+ - Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
468
+ - Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
469
+ - We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
470
+
471
+ # 12. Licenses for existing assets
472
+
473
+ Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
474
+
475
+ Answer: [Yes]
476
+
477
+ Justification: We properly cite all the papers involved in our work.
478
+
479
+ Guidelines:
480
+
481
+ - The answer NA means that the paper does not use existing assets.
482
+ - The authors should cite the original paper that produced the code package or dataset.
483
+ - The authors should state which version of the asset is used and, if possible, include a URI.
484
+ - The name of the license (e.g., CC-BY 4.0) should be included for each asset.
485
+ - For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
486
+
487
+ - If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
488
+ - For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
489
+ - If this information is not available online, the authors are encouraged to reach out to the asset's creators.
490
+
491
+ # 13. New assets
492
+
493
+ Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
494
+
495
+ Answer: [NA]
496
+
497
+ Justification: We do not introduce new assets in this paper.
498
+
499
+ Guidelines:
500
+
501
+ - The answer NA means that the paper does not release new assets.
502
+ - Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
503
+ - The paper should discuss whether and how consent was obtained from people whose asset is used.
504
+ - At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
505
+
506
+ # 14. Crowdsourcing and research with human subjects
507
+
508
+ Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
509
+
510
+ Answer: [NA]
511
+
512
+ Justification: Our paper does not involve crowdsourcing nor research with human subjects.
513
+
514
+ Guidelines:
515
+
516
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
517
+ - Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
518
+ - According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
519
+
520
+ # 15. Institutional review board (IRB) approvals or equivalent for research with human subjects
521
+
522
+ Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
523
+
524
+ Answer: [NA]
525
+
526
+ Justification: The paper neither involves crowdsourcing nor research with human subjects.
527
+
528
+ Guidelines:
529
+
530
+ - The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
531
+ - Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
532
+
533
+ - We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
534
+ - For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
535
+
536
+ # 16. Declaration of LLM usage
537
+
538
+ Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
539
+
540
+ Answer: [NA]
541
+
542
+ Justification: The core method development in this research does not involve LLMs as any important, original, or non-standard components.
543
+
544
+ Guidelines:
545
+
546
+ - The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
547
+ - Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
548
+
549
+ # A Additional Architecture Details
550
+
551
+ Detailed Structure of SVINet. LaMa [37] introduces fast Fourier convolutions (FFCs) [8] into image inpainting, achieving a receptive field that covers the whole image even in the early network layers, which facilitates the inpainting of large missing areas. To effectively fill in the occluded regions of the warped image, our SVINet is built upon the LaMa framework and consists of three sub-networks: $N_{E}$ , $N_{I}$ , and $N_{D}$ . For a $512 \times 512$ input image, $N_{E}$ uses 3 downsampling convolutional layers to reduce the input to a $64 \times 64$ feature map; $N_{I}$ contains 9 FFC residual blocks for inpainting, each consisting of two FFCs and a residual connection; and $N_{D}$ uses 3 upsampling convolutional layers to restore the resolution to $512 \times 512$ . The convolutions in $N_{I}$ and $N_{D}$ are modulated by the latent code $w^{+}$ from the 3D GAN inversion encoder $\mathrm{E}_{w^{+}}$ . Note that each FFC contains three convolutional branches and one spectral transform branch, and the convolutions within the spectral transform are also modulated, as shown in Fig. 6.
552
+
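+ To make the modulation mechanism concrete, the following PyTorch-style sketch shows how a latent code can rescale the convolution outputs of a residual block analogous to the FFC blocks in $N_{I}$. It is a simplified illustration under our own assumptions: the class names, channel sizes, and the omission of the spectral transform branch are ours and do not reproduce the actual SVINet implementation.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ModulatedConv(nn.Module):
+     """3x3 convolution whose output is rescaled per channel by an affine
+     projection of the latent code w+ (a simplified stand-in for style
+     modulation; the real SVINet layers may differ)."""
+     def __init__(self, in_ch, out_ch, w_dim=512):
+         super().__init__()
+         self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
+         self.affine = nn.Linear(w_dim, out_ch)  # maps w+ to per-channel scales
+
+     def forward(self, x, w):
+         scale = self.affine(w).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
+         return self.conv(x) * (1.0 + scale)
+
+ class FFCResBlockSketch(nn.Module):
+     """Residual block with two modulated convolutions, mirroring the
+     'two FFCs + residual connection' structure described above; the
+     spectral transform branch is omitted for brevity."""
+     def __init__(self, ch, w_dim=512):
+         super().__init__()
+         self.conv1 = ModulatedConv(ch, ch, w_dim)
+         self.conv2 = ModulatedConv(ch, ch, w_dim)
+         self.act = nn.ReLU(inplace=True)
+
+     def forward(self, x, w):
+         h = self.act(self.conv1(x, w))
+         h = self.conv2(h, w)
+         return x + h  # residual connection
+
+ # usage: a 64x64 feature map (as produced by N_E) modulated by a flattened w+
+ feat = torch.randn(1, 512, 64, 64)
+ w_plus = torch.randn(1, 512)
+ out = FFCResBlockSketch(512)(feat, w_plus)  # same shape as feat
+ ```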
553
+ ![](images/a54dd6113cec3dcf58f027eb602ca39dd42c5f72bd9605bb057e469c042d7813.jpg)
554
+ Figure 6: The detailed structure of fast Fourier convolution modulated by the latent code $w^{+}$ .
555
+
556
+ ![](images/03eeda9f4845d725c58dc85ef2ef69aac6ed1f83570b15350287bdbc57d53c34.jpg)
557
+
558
+ # B Additional Implementation Details
559
+
560
+ # B.1 Principles of Neural Radiance Fields
561
+
562
+ Neural Radiance Fields (NeRF) [27] represents a scene with a fully-connected deep network that maps a 3D spatial location $\mathbf{x}$ and a viewing direction $\mathbf{d}$ to a color $\mathbf{c}$ and a density $\sigma$ . By querying $\mathbf{x}$ and $\mathbf{d}$ along camera rays and applying classical volume rendering techniques [18], the color and density information can be projected into a 2D image. Specifically, for each projected ray $\mathbf{r}$ corresponding to a given pixel, $N_{s}$ points (denoted as $\{t_i\}_{i=1}^{N_s}$ ) are sampled along the ray. For each sampled point, the estimated color and density are denoted as $\mathbf{c}_i$ and $\sigma_i$ , respectively. The RGB value $C(\mathbf{r})$ for each ray can then be computed via volumetric rendering as follows:
563
+
564
+ $$
565
+ C (\mathbf {r}) = \sum_ {i = 1} ^ {N _ {s}} T _ {i} \left(1 - \exp \left(- \sigma_ {i} \delta_ {i}\right)\right) \mathbf {c} _ {i}, \tag {11}
566
+ $$
567
+
568
+ where $T_{i} = \exp (-\sum_{j = 1}^{i - 1}\sigma_{j}\delta_{j})$ , and $\delta_i = t_{i + 1} - t_i$ denotes the distance between adjacent samples.
569
+
570
+ Similarly, if we replace the color $\mathbf{c}_i$ of each sampled point with its distance $t_{i}$ to the camera during volumetric rendering, the depth $d(\mathbf{r})$ along each ray can be obtained as
571
+
572
+ $$
573
+ d (\mathbf {r}) = \sum_ {i = 1} ^ {N _ {s}} T _ {i} \left(1 - \exp \left(- \sigma_ {i} \delta_ {i}\right)\right) t _ {i}. \tag {12}
574
+ $$
575
+
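+ As a concrete reference for Eqs. (11) and (12), a minimal PyTorch sketch of the volumetric rendering step is given below. Variable names are our own, and the last sampling interval is padded with a large constant, a common NeRF convention that is not part of the equations above.
+
+ ```python
+ import torch
+
+ def render_ray(sigma, rgb, t):
+     """Volumetric rendering of a batch of rays (Eqs. 11-12).
+     sigma: (R, Ns) densities, rgb: (R, Ns, 3) colors, t: (R, Ns) sample depths."""
+     delta = t[:, 1:] - t[:, :-1]                        # delta_i = t_{i+1} - t_i
+     delta = torch.cat([delta, 1e10 * torch.ones_like(delta[:, :1])], dim=-1)
+     alpha = 1.0 - torch.exp(-sigma * delta)              # 1 - exp(-sigma_i * delta_i)
+     # T_i = exp(-sum_{j<i} sigma_j * delta_j), i.e. accumulated transmittance
+     T = torch.cumprod(torch.cat(
+         [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
+     weights = T * alpha                                  # (R, Ns)
+     color = (weights.unsqueeze(-1) * rgb).sum(dim=1)     # C(r), Eq. (11)
+     depth = (weights * t).sum(dim=1)                     # d(r), Eq. (12)
+     return color, depth
+
+ # usage with random samples along 4 rays of 64 points each
+ sigma, rgb = torch.rand(4, 64), torch.rand(4, 64, 3)
+ t = torch.sort(torch.rand(4, 64), dim=-1).values
+ C, d = render_ray(sigma, rgb, t)
+ ```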
576
+ # B.2 Multi-View Optimization for Editing
577
+
578
+ Our WarpGAN synthesizes novel view images based not only on the results of 3D GAN inversion but also on the warping results of the input image. Thus, modifying only the latent code within our method is insufficient to achieve the desired editing effects. Inspired by HFGI3D [43], we employ WarpGAN to generate $N$ novel view images $\{\mathbf{I}_i\}_{i=1}^N$ corresponding to $N$ different camera poses $\{c_i\}_{i=1}^N$ to assist the optimization process of PTI [34]; we denote this variant as WarpGAN-Opt.
579
+
580
+ Specifically, for a single input image $\mathbf{I}$ with the camera pose $c$ , we first employ an optimization-based GAN inversion method [1] to jointly optimize the latent code $w^{+}$ and the noise vector $n$ in the 3D GAN generator:
581
+
582
+ $$
583
+ w_{opt}^{+}, n = \underset{w^{+}, n}{\arg\min}\ \mathcal{L}_{2}(\mathcal{R}(\mathbf{G}(w^{+}, n; \theta), c), \mathbf{I}) + \lambda_{n} \mathcal{L}_{n}(n), \tag{13}
584
+ $$
585
+
586
+ where $\mathcal{L}_n$ is a noise regularization term and $\lambda_{n}$ is a hyperparameter [34].
587
+
588
+ Subsequently, we fix the optimized latent code $w_{opt}^{+}$ and fine-tune the 3D GAN generator based on the input image I and a series of novel view images synthesized by our WarpGAN:
589
+
590
+ $$
591
+ \theta^{*} = \arg\min_{\theta} \mathcal{L}_{\mathrm{G}}\left(\mathcal{R}\left(\mathbf{G}\left(w_{opt}^{+}; \theta\right), c\right), \mathbf{I}\right) + \lambda_{mv} \sum_{i=1}^{N} \mathcal{L}_{\mathrm{G}}\left(\mathcal{R}\left(\mathbf{G}\left(w_{opt}^{+}; \theta\right), c_{i}\right), \mathbf{I}_{i}\right), \tag{14}
592
+ $$
593
+
594
+ $$
595
+ \mathcal{L}_{\mathrm{G}} = \lambda_{2}^{\mathrm{G}} \mathcal{L}_{2} + \lambda_{\mathrm{LPIPS}}^{\mathrm{G}} \mathcal{L}_{\mathrm{LPIPS}}, \tag{15}
596
+ $$
597
+
598
+ where $\lambda_{mv}$ is set to 1.0; both $\lambda_2^G$ and $\lambda_{\mathrm{LPIPS}}^{\mathrm{G}}$ are set to 1.0.
599
+
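+ A schematic fragment of the generator fine-tuning objective in Eqs. (14) and (15) could look as follows; `generator`, `render`, `lpips_fn`, and the image/pose inputs are placeholders for the corresponding components rather than the exact implementation.
+
+ ```python
+ import torch.nn.functional as F
+
+ lambda_mv, lambda_2, lambda_lpips = 1.0, 1.0, 1.0  # values stated above
+
+ def recon_loss(pred, target, lpips_fn):
+     # L_G = lambda_2 * L_2 + lambda_LPIPS * L_LPIPS   (Eq. 15)
+     return lambda_2 * F.mse_loss(pred, target) + lambda_lpips * lpips_fn(pred, target)
+
+ def finetune_generator_step(generator, render, lpips_fn, optimizer,
+                             w_opt, I, c, novel_views):
+     """One optimization step of Eq. (14): the optimized latent w_opt stays
+     fixed and only the generator weights theta are updated."""
+     loss = recon_loss(render(generator(w_opt), c), I, lpips_fn)
+     for I_i, c_i in novel_views:  # WarpGAN-synthesized images and their poses
+         loss = loss + lambda_mv * recon_loss(render(generator(w_opt), c_i), I_i, lpips_fn)
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return loss.item()
+ ```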
600
+ After the aforementioned process, we obtain the optimized latent code $w_{opt}^{+}$ and the 3D GAN generator with tuned weights $\theta^{*}$ . To generate attribute-edited images from different viewpoints, we simply modify $w_{opt}^{+}$ [31, 36], specify the desired camera pose $c_{novel}$ , and feed them into the 3D GAN to obtain the edited image $\hat{\mathbf{I}}_{novel}^{edit}$ in the novel view, that is,
601
+
602
+ $$
603
+ \hat{\mathbf{I}}_{novel}^{edit} = \mathcal{R}\left(\mathbf{G}\left(w_{opt}^{+} + \alpha \mathbf{n}_{att}; \theta^{*}\right), c_{novel}\right), \tag{16}
604
+ $$
605
+
606
+ where $\mathbf{n}_{att}$ denotes a specific direction for attribute editing and $\alpha$ is a scaling factor.
607
+
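+ Once $w_{opt}^{+}$ and $\theta^{*}$ are available, producing an attribute-edited novel view via Eq. (16) reduces to a single forward pass; the sketch below again treats `render`, `generator`, and the editing direction as given placeholders.
+
+ ```python
+ def edit_novel_view(render, generator, w_opt, n_att, alpha, c_novel):
+     """Eq. (16): shift the optimized latent along an attribute direction
+     and re-render from the desired camera pose."""
+     w_edit = w_opt + alpha * n_att   # move along the editing direction n_att
+     return render(generator(w_edit), c_novel)
+
+ # e.g. a stronger edit with a larger scaling factor (names are illustrative):
+ # edited = edit_novel_view(render, G_star, w_opt, n_age, alpha=2.0, c_novel=frontal_pose)
+ ```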
608
+ # C Broader Impacts
609
+
610
+ Our proposed method, which enables novel view synthesis and attribute editing of faces from a single image, holds the potential to significantly impact various fields such as film, gaming, augmented reality (AR), and virtual reality (VR). However, it also raises concerns regarding privacy and ethics, particularly the risk of generating "deep fakes". We emphasize the necessity of implementing robust safeguards to ensure the responsible and ethical application of this technology, thereby minimizing the risk of misuse.
611
+
612
+ # D Additional Qualitative Results
613
+
614
+ Additional Qualitative Evaluation. We provide more visual comparisons between our WarpGAN and several state-of-the-art methods in Fig. 7. In addition, since we utilize multi-view images synthesized by WarpGAN to assist 3D GAN inversion optimization for editing, we also include comparisons with this optimization-based method (WarpGAN-Opt). We can see that, due to the limitations of the low bit-rate latent code, WarpGAN-Opt loses some detail compared with WarpGAN. However, by leveraging the high-quality novel view images synthesized by WarpGAN, WarpGAN-Opt achieves higher fidelity and realism in novel view synthesis than other optimization-based methods. From the figure, it can be observed that our method outperforms Dual Encoder [4]. However, since our method relies on the visible regions of the input image in the novel view to inpaint occluded regions, our method degrades to a typical encoder-based 3D GAN inversion when the view change is large and the visible region is small. In contrast, Dual Encoder focuses on high-fidelity 3D head reconstruction and thus offers greater flexibility in terms of view changes.
615
+
616
+ Additional Attribute Editing Results. To more comprehensively demonstrate the capability of our method in image attribute editing, we provide additional attribute editing results in Fig. 8. Specifically,
617
+
618
+ ![](images/3e12291841753910a5edb68534e134efa63d6ac639cd118f31d1811d4760a5b0.jpg)
619
+ Figure 7: Qualitative comparisons between our WarpGAN and several state-of-the-art methods.
620
+
621
+ we employ InterFaceGAN [36] for editing the "Anger", "Old", and "Young" attributes, and utilize the text-guided semantic editing method StyleCLIP [31] for editing the "Elsa" and "Surprised" attributes.
622
+
623
+ Reference-Based Style Editing. In our WarpGAN, the latent code plays a crucial role in controlling the inpainting process of SVINet. To more explicitly analyze the influence of the latent code, we
624
+
625
+ ![](images/fd7e765fbab9e4f290918d663e7cd34960763c550a3cf2567764ca90a7f722fa.jpg)
626
+ Input
627
+
628
+ ![](images/7a0ff5779f25bebc8354ba96e062c89b3a6268c3de6c086cba3a90104d4bbfd8.jpg)
629
+ Elsa Anger Old Young Surprised
630
+
631
+ ![](images/c61ba58a4fe51577c90ac2ab777757fdfbc5d39d666e927de5fefa34f7c1de83.jpg)
632
+ Input
633
+
634
+ ![](images/3e65fe0e419d87c1bd359b8ddb61b99f23bf380e3c4e24d8c5e9e3c33d836cb2.jpg)
635
+ Elsa Anger Old Young Surprised
636
+
637
+ ![](images/1a0036902951f972236c3a737d896a242d8276b136a737bc7d1b051a2c33874d.jpg)
638
+ Input
639
+
640
+ ![](images/4e8ce900a1b5ef357ac6e82fafef39c89a4429e1a484bd1b98043d5feb7ed509.jpg)
641
+ Elsa Anger Old Young Surprised
642
+
643
+ ![](images/7587c97eabf1b0b7e874c0317d401b3d5bd65f16c9786a1a3c7e1c22beea550f.jpg)
644
+ Input
645
+
646
+ ![](images/baff61a0bcc872a9672f99c384fe2f1bf4e925d734f37bac2f3e5e7c74a694ea.jpg)
647
+ Elsa Anger Old Young Surprised
648
+ Figure 8: Image attribute editing results obtained by our method. The edited attributes include "Elsa", "Anger", "Old", "Young", and "Surprised".
649
+
650
+ ![](images/c9bf22dc8a0b23586c809017258dfc600ff0f7a0b558a853282987c5eeeb404b.jpg)
651
+ Input
652
+
653
+ ![](images/1b6705855fc2609d6d4119f1a3475d90f684b0bb37653868bbffa68dae2c3f87.jpg)
654
+ Elsa Anger Old Young Surprised
655
+
656
+ ![](images/15e32903d97450134642b2d28e9e092ac837c8c9b487d4c1d2b8044f184c77ca.jpg)
657
+ Input
658
+
659
+ ![](images/bef4cc9019b79bc1e4e296d97e4407dc9b874a0ec81c2ed091990e852effa042.jpg)
660
+ Elsa Anger Old Young Surprised
661
+
662
+ ![](images/ba3ab64018b9180ccc6c58ac241ffa5b7b20b36ad477af069e46d7bb74039f82.jpg)
663
+ Input
664
+
665
+ ![](images/e11d3f25d5c8a3040637cafb3c4056b7ec03961639894973a851caa4ee83d9f5.jpg)
666
+ Elsa Anger Old Young Surprised
667
+
668
+ ![](images/91fb526f21a9800844c49e4b98bd2249f8a51a332a335a53d70ebdb33d616764.jpg)
669
+ Input
670
+
671
+ ![](images/205c6c62a6ee448bfe0cffd5a849f3df9eb8269f66e9b160aac1afce82c16636.jpg)
672
+ Elsa Anger Old Young Surprised
673
+
674
+ ![](images/9304cca042c583023fb7511cdf23bb2e447ec4e5d6c6add09e9a2a54f8edb463.jpg)
675
+ Figure 9: Reference-based style editing. Each row represents the editing results of the same source image corresponding to different reference images, where the source and reference images are identical along the diagonal.
676
+
677
+ perform experiments by replacing the latent code of the input image during the inpainting process. Specifically, for the source image $\mathbf{I}_s$ with the camera pose $c_{s}$ and the latent code $w_{s}^{+}$ , we replace them with the camera pose $c_{r}$ and the latent code $w_{r}^{+}$ of the reference image $\mathbf{I}_r$ during inpainting, thereby achieving simultaneous editing of view and style. The results are given in Fig. 9.
678
+
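+ In code, this reference-based editing amounts to warping the source image toward the reference pose and conditioning SVINet on the reference latent instead of the source latent; the interface below is a hypothetical sketch, not the actual API.
+
+ ```python
+ def reference_style_edit(svinet, warp, I_s, c_r, w_r):
+     """Inpaint the warped source image while SVINet is modulated by the
+     reference image's latent code (hypothetical signatures)."""
+     warped = warp(I_s, c_r)      # warp the source image to the reference pose
+     return svinet(warped, w_r)   # style is driven by w_r instead of w_s
+ ```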
679
+ For our SVINet, $w^{+}$ modulates the convolutions in both $N_{I}$ and $N_{D}$ , where $N_{I}$ processes feature maps at a resolution of $64 \times 64$ , and $N_{D}$ processes feature maps at resolutions ranging from $64 \times 64$ to $512 \times 512$ . According to the characteristics of StyleGAN [20, 21], the latent code corresponding to feature maps at resolutions of $64 \times 64$ and above primarily controls the detailed features of the image, such as the color scheme and microstructure. From Fig. 9, we observe that the main changes are in the skin tone and hair color of the face.
680
+
681
+ Qualitative Evaluation in the Cat Domain. To further validate the generalization capability of our method, we evaluate it in the cat domain. Specifically, we use the AFHQ-CAT dataset [9] for training and evaluation. Following e4e [38], we use a ResNet50 network [16] trained with MOCOv2 [7] instead of the pre-trained ArcFace network [12] to compute the identity loss in the non-facial domains during training. As shown in Fig. 10, our method can generalize well to the cat domain and perform novel view synthesis as well as attribute editing.
682
+
683
+ ![](images/d35b74817874d382b8854dc2c3bc0476590b01a07529a9fca75679f27a349460.jpg)
684
+ Figure 10: Novel view synthesis and attribute editing on cat faces by our method. We visualize the novel view synthesis results of WarpGAN and WarpGAN-Opt, as well as the editing results of the attributes "Color" and "Small Eyes".
NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cb286c6310d88d90f18ceece8a7c0e145e4f49679a17fd7035dd299d28c2a7f1
3
+ size 1827764
NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:078a33e327645c709a8445316f0bb487d140e6d11e8bdd718e8c5619ab5b38ac
3
+ size 840965
NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:02d1686b33682c3d782ca09a541e7d29c103e20d3965e4a1d47b30613b4204b7
3
+ size 403699
NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ae09d4dd7721a56d5814edc72c64c1756c4a9afa1bd855e25ddae95e88cb6d74
3
+ size 471168
NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:60a9af97bc17b3ada5bf0ddaa99ea32e2407554afa79b594ad14e7c015f791a4
3
+ size 723569
NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/full.md ADDED
The diff for this file is too large to render. See raw diff
 
NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6fe05956f4b0d8f7ede8c782ffa7150c58683af61dbb79daeeec5c67a7706771
3
+ size 3151630
NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8f64eb1947955ddd9338dd35ce3de298774aec3b9485f2655ee6106484709b7c
3
+ size 2165647
NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2e700717506f78ef9bbad6207cb027dc57492edc09f61ede103498bac692f23c
3
+ size 241166
NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:05c0afb8149419b0d22a00a4deb8e4a44108548f0b162fe18595585a5238ef33
3
+ size 295523
NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dfa95eba7bc5d107127d6c3d372a8c986e63f2a5723b64f196c5959d5dc490e2
3
+ size 621679
NeurIPS/2025/Wasserstein Transfer Learning/full.md ADDED
The diff for this file is too large to render. See raw diff
 
NeurIPS/2025/Wasserstein Transfer Learning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:26536314fdb0b5cc9b7a26fa7c0082ba9f1494d8cbd9f17a50eab5732a32b3a8
3
+ size 1316131
NeurIPS/2025/Wasserstein Transfer Learning/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:27fddf45a079f183b277f43f923345426d8e7a0c824f5be101463ba82f0e8d7d
3
+ size 1465923
NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/85ec6fc7-4946-42fc-bcea-a7c0495ae730_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:085a569e7d1576b2e3d5ce0683b2e2c24a9fe90e6e3914b5e977752a420b1545
3
+ size 171196
NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/85ec6fc7-4946-42fc-bcea-a7c0495ae730_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5a6f6cdb2109e06ce53557feb9b4c44004c4cef92fe4c17366f5e2637657e165
3
+ size 222100