diff --git a/NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_content_list.json b/NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..719791bb8608fd8b2cc185c98295dd51bc1e2438 --- /dev/null +++ b/NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c31941e5f524bcdbc91e2393d86ea6ccc2c2ac35661823da7d1b5638f615b30 +size 158298 diff --git a/NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_model.json b/NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2205dd365b457cdeec82684ccb47ccfdcc24f5b8 --- /dev/null +++ b/NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29d5e05111674857d524122408ba064f30b2027c0a2d8793324cdbe72a2f0a80 +size 206942 diff --git a/NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_origin.pdf b/NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..41e49b3198eb667a30e9733dca0a0a36cfd39b5e --- /dev/null +++ b/NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/2b1bd396-d126-4238-bc3e-568c8451ac73_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:91663ec93bb14dbec7de48a4704788f761a09939b1370261d28c6ef3c47fac35 +size 46210820 diff --git 
a/NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/full.md b/NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/full.md new file mode 100644 index 0000000000000000000000000000000000000000..ed416a28572294b0930f589914e47c0244edd04a --- /dev/null +++ b/NeurIPS/2025/WMCopier_ Forging Invisible Watermarks on Arbitrary Images/full.md @@ -0,0 +1,759 @@ +# WMCopier: Forging Invisible Image Watermarks on Arbitrary Images + +Ziping Dong $^{1}$ Chao Shuai $^{1}$ Zhongjie Ba $^{1,2*}$ Peng Cheng $^{1,2}$ Zhan Qin $^{1,2}$ Qinglong Wang $^{1,2}$ Kui Ren $^{1,2}$ + +1The State Key Laboratory of Blockchain and Data Security, Zhejiang University +$^{2}$ Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security Hangzhou, Zhejiang, China + +{dongziping,chaoshuai,zhongjieba,pengcheng,qinzhan,qinglong.wang,kuiren}@zju.edu.cn + +# Abstract + +Invisible Image Watermarking is crucial for ensuring content provenance and accountability in generative AI. While Gen-AI providers are increasingly integrating invisible watermarking systems, the robustness of these schemes against forgery attacks remains poorly characterized. This is critical, as forging traceable watermarks onto illicit content leads to false attribution, potentially harming the reputation and legal standing of Gen-AI service providers who are not responsible for the content. In this work, we propose WMCopier, an effective watermark forgery attack that operates without requiring any prior knowledge of or access to the target watermarking algorithm. Our approach first models the target watermark distribution using an unconditional diffusion model, and then seamlessly embeds the target watermark into a non-watermarked image via a shallow inversion process. We also incorporate an iterative optimization procedure that refines the reconstructed image to further trade off the fidelity and forgery efficiency. 
Experimental results demonstrate that WMCopier effectively deceives both open-source and closed-source watermark systems (e.g., Amazon's system), achieving a significantly higher success rate than existing methods. Additionally, we evaluate the robustness of forged samples and discuss potential defenses against our attack. Code is available at: https://github.com/holdrain/WMCopier.

# 1 Introduction

As generative models raise concerns about their potential misuse for generating misleading or fictitious imagery [1], watermarking techniques have become a key solution for embedding traceable information into generated content, ensuring its provenance [2]. Driven by government initiatives [3], AI companies, including Google and Amazon, are increasingly adopting invisible watermarking techniques for their generated content [4, 5], owing to the benefits of imperceptibility and robustness [6, 7].

However, existing invisible watermark systems are vulnerable to diverse attacks, including detection evasion [8, 9] and forgery [10, 11]. Although the former has received considerable research attention, forgery attacks remain poorly explored. Forgery attacks, in which non-watermarked content is falsely detected as watermarked, pose a significant challenge to the reliability of watermarking systems. These attacks maliciously attribute harmful watermarked content to innocent parties, such as Generative AI (Gen-AI) service providers, damaging the providers' reputation [12, 13].

Existing watermark forgery attacks are broadly categorized into two scenarios: the black-box setting and the no-box setting. In the black-box setting, the attacker has partial access to the watermarking system, such as knowledge of the specific watermarking algorithm [14], the ability to obtain paired data (clean images and their watermarked versions) via the embedding interface [15, 16], or query access to the watermark detector [14].
However, such black-box access is unrealistic in practice: the watermark embedding process is typically integrated into the generative service itself, rendering it inaccessible to end users and making paired data acquisition infeasible. Moreover, service providers rarely disclose the specific watermarking algorithms they employ [5]. Therefore, our focus is primarily on the no-box setting, where the attacker has neither knowledge of the watermarking algorithm nor access to its implementation, and only a collection of generated images with unknown watermarking schemes is available. Under this setting, Yang et al. [10] attempt to extract the watermark pattern by computing the mean residual between watermarked images and natural images from ImageNet [17], and then directly add the estimated pattern to forged images at the pixel level. However, this achieves limited performance because it assumes that the watermark signal remains constant across all images. Moreover, its estimation is further hindered by the domain gap between ImageNet images and the unknown clean counterparts of the watermarked samples.

Inspired by recent work [18-20] demonstrating that diffusion models serve as powerful priors capable of capturing complex data distributions, we ask a more exploratory question:

# Can diffusion models act as copiers for invisible watermarks?

To be more precise, can we leverage them to copy the underlying watermark signals embedded in watermarked images?

Building on this insight, we propose WMCopier, a no-box watermark forgery attack framework tailored for practical adversarial scenarios. In this setting, the attacker has no prior knowledge of the watermarking scheme used by the provider and only has access to watermarked content generated by the Gen-AI service. Specifically, we first train an unconditional diffusion model on watermarked images to capture their underlying distribution.
Then, we perform a shallow inversion to map clean images to their latent representations, followed by a denoising process that injects the watermark signal using the trained diffusion model. To further mitigate artifacts introduced during inversion, we propose a refinement procedure that jointly optimizes image quality and alignment with the target watermark distribution.

To evaluate the effectiveness of WMCopier, we perform comprehensive experiments across a range of watermarking schemes, including a closed-source one (Amazon's system). Experimental results demonstrate that our attack achieves a high forgery success rate while preserving excellent visual fidelity. Furthermore, we conduct a comparative robustness analysis between genuine and forged watermarks. Finally, we explore a multi-message defense strategy that provides practical guidance for improving future watermark design and deployment.

Our key contributions are summarized as follows:

- We propose WMCopier, the first no-box watermark forgery attack based on diffusion models, which forges watermark signals directly from watermarked images without requiring any knowledge of the watermarking scheme.
- We introduce a shallow inversion strategy and a refinement procedure, which inject the target watermark signal into arbitrary clean images while jointly optimizing image quality and conformity to the watermark distribution.
- Through extensive experiments, we demonstrate that WMCopier effectively forges a wide range of watermark schemes, achieving superior forgery success rates and visual fidelity, including on Amazon's deployed watermarking system.
- We explore a potential defense strategy that provides insights for improving future watermarking systems.

# 2 Preliminary

# 2.1 DDIM and DDIM Inversion

DDIM. Diffusion models generate data by progressively adding noise in the forward process and then denoising from pure Gaussian noise during the reverse process.
The forward diffusion process is modeled as a Markov chain, where Gaussian noise is gradually added to the data $x_0$ over time. At each time step $t$, the noised sample $x_t$ can be obtained in closed form as:

$$
x_t = \sqrt{\alpha_t}\, x_0 + \sqrt{1 - \alpha_t}\, \epsilon, \quad \epsilon \sim \mathcal{N}(0, \mathbb{I}) \tag{1}
$$

where $\alpha_t$ is the noise schedule, and $\epsilon$ is standard Gaussian noise.

DDIM [21] is a deterministic sampling approach for diffusion models, enabling faster sampling and inversion through deterministic trajectory tracing. In DDIM sampling, the denoising process starts from Gaussian noise $x_T \sim \mathcal{N}(0, \mathbb{I})$ and proceeds according to:

$$
x_{t-1} = \sqrt{\alpha_{t-1}} \cdot \left( \frac{x_t - \sqrt{1 - \alpha_t}\, \epsilon_\theta(x_t, t)}{\sqrt{\alpha_t}} \right) + \sqrt{1 - \alpha_{t-1}} \cdot \epsilon_\theta(x_t, t) \tag{2}
$$

for $t = T, T-1, \dots, 1$, eventually yielding the generated sample $x_0$. Here, $\epsilon_\theta(x_t, t)$ denotes a neural network trained to predict the noise added to $x_0$ at step $t$ during the forward process, by minimizing the following objective:

$$
\mathbb{E}_{x_0 \sim p_{\mathrm{data}},\, t \sim \mathcal{U}(1, T),\, \epsilon \sim \mathcal{N}(0, \mathbb{I})} \left[ \left\| \epsilon_\theta(x_t, t) - \epsilon \right\|_2^2 \right]. \tag{3}
$$

DDIM Inversion. DDIM inversion [22, 21] allows an image $x_0$ to be approximately mapped back to its corresponding latent representation $x_t$ at step $t$ by reversing the sampling trajectory. DDIM inversion has found widespread applications in computer vision, such as image editing [22, 23] and watermarking [24, 25]. We denote this inversion procedure from $x_0$ to $x_t$ as:

$$
x_t = \operatorname{Inversion}(x_0, t).
\tag{4}
$$

# 2.2 Invisible Image Watermarking

Invisible image watermarking helps regulators and the public identify AI-generated content and trace harmful outputs (such as NSFW or misleading material) back to the responsible service provider, thus enabling accountability attribution. Specifically, the watermark message inserted by the service provider typically serves as a model identifier [26]. For example, Stability AI embeds the identifier StableDiffusionV1 by converting it into a bit string and encoding it as a watermark [27]. A list of currently deployed real-world watermarking systems is provided in Table 6 in Appendix B.

Invisible image watermarking typically involves three stages: embedding, extraction, and verification. Given a clean (non-watermarked) image $x \in \mathbb{R}^{H \times W \times 3}$ and a binary watermark message $m \in \{0,1\}^K$, the embedding process uses an encoder $E$ to produce a watermarked image:

$$
x^w = E(x, m).
$$

During the extraction stage, a detector $D$ attempts to recover the embedded message from $x^w$:

$$
m' = D(x^w).
$$

During the verification stage, the extracted message $m'$ is evaluated against the original message $m$ using a verification function $V$, which measures their similarity in terms of bit accuracy. An image is considered watermarked if its bit accuracy exceeds a predefined threshold $\rho$, where $\rho$ is typically selected to achieve a desired false positive rate (FPR). For instance, to achieve an FPR below 0.05 for a 40-bit message, $\rho$ should be set to $\frac{26}{40}$, based on a Bernoulli distribution assumption [28]. Formally, the verification function is defined as:

$$
V\left(m, m', \rho\right) = \left\{ \begin{array}{ll} \text{Watermarked,} & \text{if Bit-Accuracy}\left(m, m'\right) \geq \rho; \\ \text{Non-Watermarked,} & \text{otherwise.} \end{array} \right.
\tag{5}
$$

# 3 Threat Model

In a watermark forgery attack, the attacker forges the watermark of a service provider onto clean images, including malicious or illegal content. As a result, these forged images may be incorrectly attributed to the service provider, leading to reputational harm and legal ramifications.

Attacker's Goal. The attacker aims to produce a forged watermarked image $x^f$ that visually resembles a given clean image $x$, yet is detected by the detector $D$ as containing a target watermark

![](images/0b29c24a62672776d960864ce9ad8499df838ccb31e2360ac31a9cb321912dad.jpg)
Figure 1: The pipeline of WMCopier. WMCopier consists of three stages. In the first stage, an unconditional diffusion model is trained to estimate the watermark distribution. In the second stage, the estimated watermark is injected into a non-watermarked image using shallow inversion and denoising. Finally, a refinement procedure is applied to mitigate artifacts and ensure conformity to the target watermark distribution $p_w(x)$.

message $m$. Specifically, visual consistency is required to retain the original (possibly harmful) semantic content and to avoid visible artifacts that may reveal the attack.

Attacker's Capability. We consider a threat model under the no-box setting:

- The attacker does not know the target watermarking scheme or its internal parameters. They can neither embed watermarks into their own images nor access the corresponding detection pipeline.
- The attacker can collect a subset of watermarked images from AI-generated content platforms (e.g., PromptBase [29], PromptHero [30]) or directly use the target Gen-AI service.
- The attacker assumes a static watermarking scheme, i.e., the service provider does not alter the watermarking scheme during the attack period.
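The threshold $\rho$ in Equation 5 can be sanity-checked against the binomial null model of Section 2.2: on a non-watermarked image, each extracted bit matches the target with probability 1/2. A minimal sketch (the helper names are ours, not from the paper):

```python
from math import comb

def min_matching_bits(k_bits: int, target_fpr: float) -> int:
    """Smallest number of matching bits whose upper-tail probability under
    Binomial(k_bits, 0.5), i.e. random guessing on a non-watermarked image,
    stays at or below target_fpr."""
    total = 2 ** k_bits
    tail = 0
    # Accumulate the tail from the top; P(X >= t) grows as t decreases.
    for t in range(k_bits, -1, -1):
        tail += comb(k_bits, t)
        if tail / total > target_fpr:
            return t + 1
    return 0

def verify(m, m_prime, rho: float) -> bool:
    """Equation 5: declare 'Watermarked' iff bit accuracy >= rho."""
    matches = sum(a == b for a, b in zip(m, m_prime))
    return matches / len(m) >= rho
```

For a 40-bit message and a target FPR of 0.05 this yields a threshold of 26 matching bits, i.e., $\rho = 26/40$ as in the example above.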
# 4 WMCopier

In this section, we introduce WMCopier, a watermark forgery attack pipeline consisting of three stages: (1) Watermark Estimation, (2) Watermark Injection, and (3) Refinement. An overview of the proposed framework is illustrated in Figure 1.

# 4.1 Watermark Estimation

Diffusion models fit a plausible data manifold [21, 31, 32] by optimizing Equation 3. The noise predictor $\epsilon_\theta(x_t, t)$ approximates the conditional expectation of the noise:

$$
\epsilon_\theta(x_t, t) \approx \mathbb{E}[\epsilon \mid x_t] := \hat{\epsilon}(x_t), \tag{6}
$$

which effectively turns $\epsilon_\theta$ into a regressor for the conditional noise distribution.

Now consider a clean image $x$ and its watermarked version $x^w = x + w$, where $w$ denotes the embedded watermark signal, which can also be interpreted as the perturbation introduced by the embedding process. During the forward diffusion process, we have:

$$
x_t^w = \sqrt{\alpha_t}\,(x + w) + \sqrt{1 - \alpha_t}\,\epsilon = x_t + \sqrt{\alpha_t}\, w, \tag{7}
$$

where $x_t$ is the noisy version of the clean image at step $t$. The presence of the additive term $\sqrt{\alpha_t}\, w$ implies that the input to the noise predictor carries a watermark-dependent shift. As a result, the predicted noise satisfies:

$$
\epsilon_\theta(x_t^w, t) = \hat{\epsilon}(x_t^w) = \hat{\epsilon}(x_t + \sqrt{\alpha_t}\, w) \approx \hat{\epsilon}(x_t) + \delta_t(w), \tag{8}
$$

![](images/f3b8b12e6228e209242e224d040f348c9091b6e014d6f130bb92edd11500e45f.jpg)
Figure 2: Watermark detectability of four open-source watermarking schemes throughout the diffusion and denoising processes ($T = 1000$). As a reference, the bit accuracy of non-watermarked images remains around 0.5.
![](images/d65d2e10e1eb26542e5a354c4b68d1dda4bfdfc35ae61823da7d1b5638f615b30.jpg)

where $\delta_t(w)$ denotes the systematic prediction bias introduced by the watermark signal. These biases accumulate subtly at each denoising step, gradually steering the model's output distribution toward the watermarked distribution $p_w(x)$.

To exploit this behavior, we construct an auxiliary dataset $\mathcal{D}_{\mathrm{aux}} = \{x^w \mid x^w \sim p_w(x)\}$, where each image contains an embedded watermark message $m$. We then train an unconditional diffusion model $\mathcal{M}_\theta$ on $\mathcal{D}_{\mathrm{aux}}$.

Our goal is to obtain forged images $x^f$ that carry the watermark signal while preserving the semantic content of a clean image $x$. Therefore, given the pretrained model $\mathcal{M}_\theta$ and a clean image $x$, we first apply DDIM inversion to obtain a latent representation $x_T$:

$$
x_T = \operatorname{Inversion}(x, T). \tag{9}
$$

The latent representation retains semantic information about the clean image. Starting from $x_T$, we apply the denoising process described in Equation 2 to obtain the forged image $x^f$, where the bias in Equation 8 naturally guides the denoising process toward the distribution of watermarked images.

# 4.2 Watermark Injection

We observe that images reconstructed with full-step inversion suffer from severe quality degradation, as illustrated in the top row of Figure 3. This is because inversion tends to accumulate reconstruction errors when the input clean images lie outside the training data distribution, especially as the inversion depth increases [22, 33, 21]. To mitigate this, we investigate watermark detectability for four open-source watermarking methods throughout the diffusion and denoising processes.
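The inversion-then-denoise procedure of Equations 2 and 9 can be sketched as follows: a minimal numpy sketch with the trained noise predictor left as a stub (function and parameter names are ours, not the paper's implementation):

```python
import numpy as np

def ddim_step(x_t, t, eps_theta, a):
    """Eq. 2: deterministic DDIM denoising step t -> t-1.
    a[t] is the cumulative noise schedule alpha_t."""
    eps = eps_theta(x_t, t)
    x0_hat = (x_t - np.sqrt(1.0 - a[t]) * eps) / np.sqrt(a[t])
    return np.sqrt(a[t - 1]) * x0_hat + np.sqrt(1.0 - a[t - 1]) * eps

def ddim_inversion_step(x_t, t, eps_theta, a):
    """Reverse of Eq. 2: map x_t one step up the trajectory, t -> t+1."""
    eps = eps_theta(x_t, t)
    x0_hat = (x_t - np.sqrt(1.0 - a[t]) * eps) / np.sqrt(a[t])
    return np.sqrt(a[t + 1]) * x0_hat + np.sqrt(1.0 - a[t + 1]) * eps

def invert_then_denoise(x, eps_theta, a, t_stop):
    """Invert a clean image up to step t_stop (Eq. 9), then denoise back
    so that the bias of a watermark-trained predictor (Eq. 8) injects
    the watermark signal."""
    x_t = x
    for t in range(0, t_stop):
        x_t = ddim_inversion_step(x_t, t, eps_theta, a)
    for t in range(t_stop, 0, -1):
        x_t = ddim_step(x_t, t, eps_theta, a)
    return x_t
```

With an ideal predictor the round trip approximately reproduces $x$; in WMCopier the predictor is trained on watermarked images, so its systematic bias steers the output toward $p_w(x)$ instead.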
As illustrated in Figure 2, the watermark signal tends to be destroyed gradually during the shallow steps (e.g., $t \leq 400$ for $T = 1000$); consequently, the watermark signal can be restored during the corresponding denoising steps.

Therefore, we propose a shallow inversion strategy that performs the inversion process only up to an early timestep $T_S < T$. By skipping deeper diffusion steps that contribute minimally to watermark injection yet substantially distort image semantics, our method effectively preserves the visual fidelity of reconstructed images while ensuring reliable watermark injection.

# 4.3 Refinement

Although shallow inversion effectively reduces reconstruction errors, forged images may still exhibit minor artifacts (as shown in Figure 3) that make them visually distinguishable, thus exposing the forgery. To address this, we propose a refinement procedure to adjust the forged image $x^f$, defined as:

$$
x^{f(i+1)} = x^{f(i)} + \eta \nabla_{x^{f(i)}} \left[ \log p_w\left(x^{f(i)}\right) - \lambda \left\| x^{f(i)} - x \right\|^2 \right], \quad i \in \{0, 1, \dots, L\} \tag{10}
$$

where $\eta$ is the step size, $\lambda$ balances semantic fidelity against watermark injection, and $L$ is the number of optimization iterations. The log-likelihood $\log p_w(x^f)$ constrains the samples to lie in regions of high probability

![](images/dafc93e601a1b9782ab576ce181f5be6e338698c2a26f7b4afe2451f22bd94cd.jpg)
Figure 3: Forged samples generated using full-step inversion, shallow inversion, and shallow inversion with refinement. The first row shows results from full-step inversion ($T_S = T = 100$), where the semantic content of the original clean image is heavily disrupted. The second row corresponds to shallow inversion ($T_S = 40$, $T = 100$), which introduces only slight artifacts. The third row demonstrates shallow inversion with refinement, where these artifacts are further reduced.
under the watermarked image distribution $p_w(x)$, while the mean squared error (MSE) term $\| x^{f(i)} - x \|^2$ ensures that the refined image remains similar to the clean image $x$. Since the distribution $p_w(x)$ and the conditional noise distribution $p_w^t(x_t)$ are nearly identical at a low noise step $t_l$, the score function $\nabla \log p_w(x)$ can be approximated by $\nabla \log p_w^t(x_t)$. This score can be estimated using the pre-trained diffusion model $\mathcal{M}_\theta$ [34, 35], as defined in Equation 11, where $x_t^f = \sqrt{\alpha_t}\, x^f + \sqrt{1 - \alpha_t}\,\epsilon$.

$$
\nabla_{x^f} \log p_w(x^f) \approx \nabla_{x_{t_l}^f} \log p_w^{t_l}(x_{t_l}^f) \approx -\frac{1}{\sqrt{1 - \alpha_{t_l}}}\, \epsilon_\theta(x_{t_l}^f, t_l). \tag{11}
$$

Performing this refinement for $L$ iterations yields the final forged watermarked image $\hat{x}^f$. The refinement improves both watermark detectability and the image quality of the forged images, as demonstrated in Figure 3 and Table 11. A complete overview of the WMCopier procedure is summarized in Algorithm 1.

# 5 Evaluation

Datasets. To simulate real-world watermark forgery scenarios, we train our diffusion model on AI-generated images and apply watermark forgeries to both AI-generated images and real photographs. For AI-generated images, we use DiffusionDB [36], which contains a diverse collection of images generated by Stable Diffusion [37]. For real photographs, we adopt three widely used computer vision datasets: MS-COCO [38], ImageNet [17], and CelebA-HQ [39].

Watermarking Schemes. We evaluate five watermarking schemes: three post-processing methods (DWT-DCT [40], HiDDeN [41], and RivaGAN [42]), an in-processing method (Stable Signature [26]), and a closed-source watermark system (Amazon [4]). Each watermarking scheme is evaluated using its official default configuration.
A comprehensive description of these methods is included in Appendix C.

Attack Parameters and Baselines. For the diffusion model, we adopt DDIM sampling with a total of $T = 100$ steps and perform inversion up to step $T_S = 40$. Further details regarding the training of the diffusion model are provided in Appendix F. For the refinement procedure, we set the trade-off coefficient $\lambda$ to 100, the number of refinement iterations $L$ to 100, the low-noise step $t_l$ to 1, and the step size $\eta$ to $1 \times 10^{-4}$ by default. To balance attack performance against the cost of acquiring generated images (e.g., fees from Gen-AI services), we set the size of the auxiliary dataset $\mathcal{D}_{\mathrm{aux}}$ to 5,000 in our main experiments. For comparison, we consider the method by Yang et al. [10], which operates under the same no-box setting as ours, and that of Wang et al. [16], which assumes a black-box setting with access to paired watermarked and clean images.
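Under the default hyperparameters above, the refinement update of Equations 10 and 11 can be sketched as follows (a numpy sketch with the noise predictor stubbed; the names are ours, not the paper's implementation):

```python
import numpy as np

def refine(x_f, x, eps_theta, alpha_bar, t_l=1, eta=1e-4, lam=100.0, L=100, seed=0):
    """Gradient-ascent refinement (Eq. 10). The score of p_w is
    approximated via the trained noise predictor (Eq. 11)."""
    rng = np.random.default_rng(seed)
    x_f = x_f.copy()
    a = alpha_bar[t_l]
    for _ in range(L):
        noise = rng.standard_normal(x_f.shape)
        x_t = np.sqrt(a) * x_f + np.sqrt(1.0 - a) * noise   # diffuse to step t_l
        score = -eps_theta(x_t, t_l) / np.sqrt(1.0 - a)     # Eq. 11
        fidelity_grad = 2.0 * (x_f - x)                     # gradient of ||x_f - x||^2
        x_f = x_f + eta * (score - lam * fidelity_grad)     # Eq. 10
    return x_f
```

With the score term zeroed out, the update contracts $x^f$ toward the clean image $x$ at rate $(1 - 2\eta\lambda)$ per iteration; the score term counteracts this pull by pushing toward high-probability regions of $p_w(x)$.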
Each cell reports PSNR↑ / Forged Bit-acc↑ / FPR@$10^{-6}$↑.

| Watermark scheme | Dataset | Wang et al. [16] (Black-Box) | Yang et al. [10] (No-Box) | Ours (No-Box) |
|---|---|---|---|---|
| DWT-DCT | MS-COCO | 31.33 / 74.32% / 57.20% | 32.87 / 53.08% / 0.50% | 33.69 / 89.19% / 60.20% |
| DWT-DCT | CelebAHQ | 32.19 / 81.29% / 50.70% | 32.90 / 53.68% / 0.10% | 35.29 / 89.46% / 53.20% |
| DWT-DCT | ImageNet | 30.16 / 79.64% / 55.10% | 32.92 / 51.96% / 0.20% | 33.75 / 88.25% / 55.80% |
| DWT-DCT | Diffusiondb | 31.87 / 78.22% / 50.80% | 32.90 / 51.59% / 0.40% | 33.84 / 85.17% / 54.30% |
| HiDDeN | MS-COCO | 31.02 / 80.56% / 77.60% | 29.68 / 63.12% / 0.00% | 31.74 / 99.34% / 95.90% |
| HiDDeN | CelebAHQ | 31.57 / 82.28% / 80.20% | 29.79 / 61.52% / 0.00% | 33.12 / 98.08% / 92.50% |
| HiDDeN | ImageNet | 31.24 / 78.61% / 83.90% | 29.78 / 62.66% / 0.00% | 31.76 / 98.99% / 94.30% |
| HiDDeN | Diffusiondb | 30.74 / 79.99% / 79.20% | 29.68 / 63.36% / 0.00% | 31.46 / 98.83% / 94.60% |
| RivaGAN | MS-COCO | 32.94 / 93.26% / 88.80% | 29.12 / 50.80% / 0.00% | 34.07 / 95.74% / 90.90% |
| RivaGAN | CelebAHQ | 32.64 / 93.67% / 93.80% | 29.23 / 52.29% / 0.00% | 35.28 / 98.61% / 96.00% |
| RivaGAN | ImageNet | 33.11 / 90.94% / 71.40% | 29.22 / 50.92% / 0.00% | 33.87 / 93.83% / 77.10% |
| RivaGAN | Diffusiondb | 33.31 / 89.69% / 80.60% | 29.12 / 48.70% / 0.00% | 34.50 / 90.43% / 84.80% |
| Stable Signature | MS-COCO | 28.87 / 91.68% / 88.90% | 30.77 / 52.67% / 0.00% | 31.29 / 98.04% / 94.60% |
| Stable Signature | CelebAHQ | 32.33 / 79.90% / 90.10% | 30.51 / 51.73% / 0.00% | 30.54 / 96.04% / 100.00% |
| Stable Signature | ImageNet | 29.59 / 85.77% / 85.90% | 30.75 / 51.59% / 0.00% | 31.33 / 97.03% / 98.60% |
| Stable Signature | Diffusiondb | 31.11 / 89.24% / 92.10% | 30.65 / 52.69% / 0.00% | 31.59 / 96.24% / 96.60% |
| Average | | 31.50 / 84.32% / 76.64% | 30.62 / 54.52% / 0.08% | 32.94 / 94.58% / 83.71% |
Metrics. We evaluate the visual quality of forged images using Peak Signal-to-Noise Ratio (PSNR), defined for pixel values normalized to $[0, 1]$ as $\mathrm{PSNR}(x, \hat{x}^f) = -10 \cdot \log_{10}\left(\mathrm{MSE}(x, \hat{x}^f)\right)$, where $x$ is the clean image and $\hat{x}^f$ is the forged image after the refinement process. A higher PSNR indicates better visual fidelity, i.e., the forged image is more similar to the original. We evaluate attack effectiveness in terms of bit accuracy and false positive rate (FPR). Bit accuracy measures the proportion of watermark bits in the extracted message that match the target. FPR refers to the rate at which forged samples are incorrectly identified as valid watermarked images; a higher FPR thus indicates a more successful attack. We report FPR at a threshold calibrated to yield a $10^{-6}$ false positive rate on clean images.

# 5.1 Attacks on Open-Source Watermarking Schemes

As shown in Table 1, our WMCopier achieves the highest forged bit accuracy and FPR across all watermarking schemes, even surpassing the baseline in the black-box setting. In terms of visual fidelity, all forged images exhibit a PSNR above 30 dB, demonstrating that WMCopier effectively achieves high image quality. For the frequency-domain watermarking scheme DWT-DCT, the bit accuracy is slightly lower compared to other schemes. We attribute this to the inherent limitations of DWT-DCT, which exhibits low bit accuracy on certain images even before any attack. A detailed analysis is presented in Appendix D.1.

Table 1: Comparison of our WMCopier with two baselines on four open-source watermarking methods. Highlighted cells indicate the highest value in each row for the corresponding metric. Arrows indicate the desired direction of each metric (↑ for higher values being better).
Each cell reports PSNR↑ / SR↑ / Con.↑.

| Watermark scheme | Dataset | Yang et al. [10] | Ours |
|---|---|---|---|
| Amazon WM | Diffusiondb | 23.42 / 29.0% / 2 | 32.57 / 100.0% / 2.94 |
| Amazon WM | MS-COCO | 24.18 / 32.0% / 2 | 32.93 / 100.0% / 2.97 |
| Amazon WM | CelebA-HQ | 24.10 / 42.0% / 2 | 31.84 / 100.0% / 2.98 |
| Amazon WM | ImageNet | 23.95 / 28.0% / 2 | 32.88 / 99.0% / 2.89 |
Table 2: Performance comparison of the baseline and WMCopier on the Amazon watermark.

![](images/b69f3ad513e70e01c53753c299afa1a231271dbbadc26b0a4e25d7512ffb1c26.jpg)
Figure 4: Comparison of forged bit accuracy distributions: Yang's method vs. ours.

# 5.2 Attacks on Closed-Source Watermarking Systems

In this subsection, we evaluate the effectiveness of our attack and Yang's method against the Amazon watermarking scheme. The results are shown in Table 2. The success rate (SR), i.e., the proportion of images detected as watermarked, and the confidence levels (Con.) returned by the API are used to evaluate attack effectiveness on deployed watermarking systems. Compared with Yang's method, our attack achieves superior performance in terms of

![](images/2cc3f446a7d9a0831cc5b07413d80f66ea9059e3fcb86bc509863c6ad0bcbc02.jpg)
Figure 5: Effect of refinement iterations $L$ (left) and trade-off coefficient $\lambda$ (right) on PSNR and Bit-Accuracy under our forgery attacks, with fixed $\eta = 10^{-4}$.

![](images/3d664d6379ccc833d538189745ad29be3c2e576460f7922c589d3466468affb5.jpg)

both visual fidelity and forgery effectiveness. Specifically, our method achieves an average PSNR exceeding 30 dB and a success rate (SR) close to $100\%$, whereas Yang's method typically yields PSNR values below 25 dB and SRs ranging from $28\%$ to $42\%$.

Furthermore, our forged images generally receive a confidence level of 3, the highest rating defined by Amazon's watermark detection API, while Yang's results consistently remain at level 2. Since Amazon does not disclose how the confidence score is computed, we conjecture that it correlates with bit accuracy, based on common assumptions [28]. To investigate this further, we analyzed the distribution of forged bit accuracy of both our method and Yang's on an open-source watermarking scheme.
As shown in Figure 4, our method achieves over $80\%$ bit accuracy on RivaGAN, significantly outperforming Yang's method, which remains below $70\%$.

# 5.3 Ablation Study

To evaluate the impact of parameter choices on image quality and forgery effectiveness, we conduct two sets of ablation studies by varying (i) the number of refinement optimization steps $L$ and (ii) the trade-off coefficient $\lambda$. As shown in Figure 5, increasing $L$ initially improves both PSNR and forged bit accuracy, with performance saturating beyond $L = 100$. In contrast, larger $\lambda$ values continuously enhance PSNR but lead to a slight degradation in bit accuracy, likely due to over-regularization. While higher PSNR values generally indicate better visual fidelity, we note that visible artifacts may still occur even at elevated PSNR levels. Nevertheless, since an attacker may prioritize forgery success over perceptual quality, we adopt $\lambda = 100$ in our main experiments. The results presented in Table 11 in Appendix E further validate the effectiveness of the refinement process.

# 5.4 Robustness

To investigate the robustness of forged images, we evaluate the bit accuracy of genuine and forged watermarked images under common image distortions, including Gaussian noise, JPEG compression, Gaussian blur, and brightness adjustment. Since Stable Signature does not support watermark embedding into arbitrary images, we instead report results on generated images. As shown in Table 3, the forged watermark generally exhibits slightly lower robustness than the genuine watermark. While some cases show over $20\%$ degradation (highlighted in red), relying on bit accuracy under distortion to separate forged from genuine watermarks is inadequate, as doing so would substantially compromise the true positive rate (TPR), as discussed in Appendix D.3.
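The distortion battery used in this section can be sketched in numpy for the two distortions that need no imaging library (JPEG compression and Gaussian blur would require, e.g., Pillow; the helper names and the $[0, 1]$ float-image convention are ours):

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.05, seed=0):
    """Additive Gaussian noise on a [0, 1] float image (sigma = 0.05 above)."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def adjust_brightness(img, factor=6.0):
    """PIL-style brightness enhancement: factor 1.0 leaves the image unchanged."""
    return np.clip(img * factor, 0.0, 1.0)

def bit_accuracy(m, m_prime):
    """Fraction of extracted bits matching the target message."""
    m, m_prime = np.asarray(m), np.asarray(m_prime)
    return float((m == m_prime).mean())
```

Robustness is then measured by distorting a (genuine or forged) watermarked image, re-running the detector $D$, and comparing the resulting bit accuracy against the undistorted case.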
+ +# 6 Related Work + +# 6.1 Image Watermarking + +Image watermarking techniques can generally be categorized into post-processing and in-processing methods, depending on when the watermark is embedded. + +Post-processing methods embed watermark messages into images after generation. Non-learning-based methods (e.g., LSB [43], DWT-DCT [40, 44]) suffer from poor robustness under common + +
Each cell reports Genuine / Forged bit accuracy; "—" marks entries not reported.

| Watermark scheme | Dataset | JPEG | Blur | Gaussian Noise | Brightness |
|---|---|---|---|---|---|
| DWT-DCT | MS-COCO | 56.44% / 53.00% | 59.84% / 56.56% | 67.86% / 66.90% | 54.66% / 58.36% |
| DWT-DCT | CelebAHQ | 55.42% / 53.14% | 63.12% / 58.26% | 64.84% / 66.49% | 53.89% / 57.73% |
| DWT-DCT | ImageNet | 56.08% / 52.31% | 59.37% / 54.39% | 68.27% / 67.60% | 54.08% / 57.37% |
| DWT-DCT | Diffusiondb | 58.16% / 53.23% | 62.12% / 55.74% | 66.90% / 64.43% | 54.73% / 56.83% |
| HiDDeN | MS-COCO | 58.68% / 58.06% | 78.50% / 71.95% | 54.13% / 49.55% | 82.40% / 78.99% |
| HiDDeN | CelebAHQ | 57.05% / 55.07% | 79.83% / 69.07% | 48.94% / 46.02% | 83.63% / 73.21% |
| HiDDeN | ImageNet | 58.86% / 57.83% | 78.20% / 71.34% | 54.10% / 49.57% | 80.95% / 77.40% |
| HiDDeN | Diffusiondb | 58.57% / 57.61% | 79.69% / 72.89% | 54.41% / 50.19% | 81.53% / 77.66% |
| RivaGAN | MS-COCO | 99.44% / 93.32% | 99.60% / 94.99% | 85.71% / 75.00% | 84.51% / 78.81% |
| RivaGAN | CelebAHQ | 99.92% / 97.22% | 99.97% / 98.23% | 85.93% / 74.83% | 84.60% / 79.53% |
| RivaGAN | ImageNet | 98.95% / 92.00% | 99.28% / 93.89% | 84.95% / 74.74% | 82.77% / 77.25% |
| RivaGAN | Diffusiondb | 96.56% / 84.85% | 97.27% / 86.96% | 77.33% / 66.27% | 79.14% / 71.65% |
| Stable Signature | MS-COCO | 93.99% / 89.48% | 86.91% / 68.34% | 73.78% / 67.14% | 92.30% / 88.63% |
| Stable Signature | CelebAHQ | — / 86.73% | — / 65.42% | — / 65.33% | — / 86.86% |
| Stable Signature | ImageNet | — / 87.73% | — / 64.88% | — / 61.79% | — / 91.41% |
| Stable Signature | Diffusiondb | — / 85.69% | — / 65.45% | — / 61.60% | — / 87.45% |
+ +Table 3: Bit Accuracy of the genuine watermark and the forged watermark under various image distortions. The distortion parameters are: Gaussian Noise ( $\sigma = 0.05$ ), JPEG (quality=90), Blur (radius=1), and Brightness (factor=6). Cells with background indicate a degradation gap between $10\%$ and $20\%$ , and cells with background indicate a degradation gap greater than $20\%$ . + +distortions such as compression and noise. Neural network-based approaches mitigate these issues by combining encoder-decoder architectures and adversarial training [41, 45-47]. However, these methods often rely on heavy training and may generalize poorly to unknown attacks. + +In-processing methods embed watermarks during image generation, either by modifying training data or model weights [19, 48, 28], or by adjusting specific components such as diffusion decoders [26]. Recent trends explore semantic watermarking, which binds messages to generative semantics (e.g., Tree-Ring [49]; Gaussian shading [50]). However, semantic watermarking has not seen real-world deployment [14]. We discuss the effectiveness of our attack on the semantic watermarking in the Appendix D.2. + +# 6.2 Watermark Forgery + +Kutter et al. [51] first introduced the concept, also known as the watermark copy attack, under the assumption that the watermark signal was a fixed constant. While this assumption was reasonable for early handcrafted watermarking methods, it no longer holds for modern neural network-based schemes. Subsequent studies [52, 16, 53, 14] have investigated watermark forgery under either white-box or black-box settings, where the attacker either has full access to the watermarking model or can embed watermarks into their own images. However, these approaches still rely on strong assumptions that may not hold in realistic deployment scenarios. + +In contrast, the no-box setting assumes that only watermarked images are available to the attacker, without access to the model or embedding process. 
Yang et al. [10] proposed a heuristic method under this setting, estimating the watermark signal by averaging the residuals between watermarked and clean images and then re-embedding the estimated pattern at the pixel level. This is the scenario we focus on in this work, as it most accurately reflects practical constraints.

# 7 Defense Analysis

To harden deployed watermarking systems, we suggest modifying the existing watermark system to disrupt the ability of diffusion models to model the watermark distribution. Specifically, we propose a multi-message strategy as a simple yet effective countermeasure. Instead of embedding a fixed watermark message, the system randomly selects one message from a predefined pool $\{m_{1}, m_{2}, \ldots, m_{K}\}$ for each image. During detection, the detector verifies the presence of any valid message in the pool. This strategy introduces uncertainty into the watermark signal, increasing the entropy of possible watermark patterns and making it substantially more difficult for generative models to learn the consistent features necessary for forgery. We implement this defense with different message pool sizes ($K = 10, 50, 100$) and, for simplicity, test on 100 images.

As shown in Table 4, increasing $K$ causes the FPR to drop to $0\%$ at $K = 50$ and $K = 100$. We further strengthen our attack by collecting more watermarked images, gathering 5,000, 20,000, and 50,000 watermarked samples to evaluate the effect of data volume on this defense. As shown in Table 12, the FPR remains consistently at $0\%$ even as the size of $D_{\mathrm{aux}}$ increases. Embedding multiple messages therefore proves to be a simple yet effective countermeasure against our attack.

Table 4: Performance comparison across different $K$ values.
| Dataset | K=10 PSNR↑ | K=10 Forged Bit-acc.↑ | K=10 FPR@$10^{-6}$↑ | K=50 PSNR↑ | K=50 Forged Bit-acc.↑ | K=50 FPR@$10^{-6}$↑ | K=100 PSNR↑ | K=100 Forged Bit-acc.↑ | K=100 FPR@$10^{-6}$↑ |
|---|---|---|---|---|---|---|---|---|---|
| MS-COCO | 34.73 | 81.63% | 34.00% | 34.62 | 69.78% | 0.00% | 34.86 | 71.56% | 0.00% |
| CelebA-HQ | 36.13 | 83.41% | 44.00% | 35.89 | 71.00% | 0.00% | 35.87 | 72.91% | 0.00% |
| ImageNet | 34.55 | 79.25% | 25.00% | 34.35 | 70.09% | 0.00% | 34.58 | 71.44% | 0.00% |
| DiffusionDB | 35.14 | 76.28% | 17.00% | 35.10 | 70.66% | 0.00% | 35.40 | 72.28% | 0.00% |
Table 5: Performance comparison across datasets with larger sizes of $D_{\mathrm{aux}}$ for $K = 100$.
| Dataset | 5,000 PSNR↑ | 5,000 Forged Bit-acc.↑ | 5,000 FPR@$10^{-6}$↑ | 20,000 PSNR↑ | 20,000 Forged Bit-acc.↑ | 20,000 FPR@$10^{-6}$↑ | 50,000 PSNR↑ | 50,000 Forged Bit-acc.↑ | 50,000 FPR@$10^{-6}$↑ |
|---|---|---|---|---|---|---|---|---|---|
| MS-COCO | 34.86 | 71.56% | 0.00% | 34.78 | 71.91% | 0.00% | 30.77 | 71.94% | 0.00% |
| CelebA-HQ | 35.87 | 72.91% | 0.00% | 34.15 | 72.97% | 1.00% | 27.99 | 72.72% | 1.00% |
| ImageNet | 34.58 | 71.44% | 0.00% | 34.57 | 72.56% | 0.00% | 30.47 | 72.19% | 0.00% |
| DiffusionDB | 35.40 | 72.28% | 0.00% | 34.99 | 72.34% | 0.00% | 31.15 | 72.06% | 0.00% |
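To make the multi-message detection rule concrete, the sketch below shows one plausible way to implement it; the 48-bit message length, the threshold rule, and the `pool_threshold`/`detect_with_pool` helpers are illustrative assumptions, not the deployed detector. Detection passes if the decoded message matches any pooled message above a bit-match threshold, which a union bound over the $K$ messages ties to the $10^{-6}$ false-positive budget.

```python
import math
import numpy as np

def pool_threshold(n_bits, pool_size, target_fpr=1e-6):
    """Smallest bit-match count t such that a uniformly random decode matches
    ANY of `pool_size` messages at >= t bits with probability <= target_fpr
    (binomial tail, union bound over the pool)."""
    for t in range(n_bits + 1):
        tail = sum(math.comb(n_bits, i) for i in range(t, n_bits + 1)) / 2 ** n_bits
        if pool_size * tail <= target_fpr:
            return t
    return n_bits

def detect_with_pool(decoded_bits, message_pool, t):
    """Watermarked iff the decode matches some pooled message at >= t bits."""
    decoded = np.asarray(decoded_bits)
    best = max(int((decoded == np.asarray(m)).sum()) for m in message_pool)
    return best >= t, best

rng = np.random.default_rng(0)
n_bits, K = 48, 10
pool = [rng.integers(0, 2, n_bits) for _ in range(K)]
t = pool_threshold(n_bits, K)   # detection bar at FPR <= 1e-6

print(detect_with_pool(pool[3], pool, t))   # a genuine decode clears the bar
# A forger who averages over the pool learns a blurred, per-bit-majority
# signal that matches no single pooled message well:
blended = (np.mean(pool, axis=0) > 0.5).astype(int)
print(detect_with_pool(blended, pool, t))
```

Note the tension this defense exploits: the detection bar rises only slowly with $K$ (the union bound multiplies the tail by $K$), while an attacker who learns the pooled watermark distribution is pulled toward an average of the $K$ messages, so the forged decode tends to match no single message well.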
# 8 Conclusion

We propose WMCopier, a diffusion-model-based watermark forgery attack designed for the no-box setting: it leverages a diffusion model to estimate the target watermark distribution and performs shallow inversion to forge watermarks on a specific image. We also introduce a refinement procedure that improves both image quality and forgery effectiveness. Extensive experiments demonstrate that WMCopier achieves state-of-the-art performance against both open-source watermarking schemes and real-world deployed systems. Finally, we explore a potential defense, the multi-message strategy, offering valuable insights for the future development of AIGC watermarking techniques.

# 9 Acknowledgments

We sincerely thank our anonymous reviewers for their valuable feedback and Amazon AGI's Responsible team for their prompt response. This paper is supported in part by the National Key Research and Development Program of China (2021YFB3100300, 2023YFB2904000, and 2023YFB2904001), the National Natural Science Foundation of China (62441238, 62072395, U20A20178, 62172359, and 62472372), the Zhejiang Provincial Natural Science Foundation of China under Grant LD24F020010, the Key Research and Development Program of Hangzhou City (2024SZD1A27), and the Key R&D Programme of Zhejiang Province (2025C02264).

# References

[1] Kayleen Devlin and Joshua Cheetham. Fake trump arrest photos: How to spot an ai-generated image. https://www.bbc.com/news/world-us-canada-65069316, 2023.

[2] Zhengyuan Jiang, Moyang Guo, Yuepeng Hu, and Neil Zhenqiang Gong. Watermark-based detection and attribution of ai-generated content. arXiv preprint arXiv:2404.04254, 2024.

[3] Diane Bartz and Krystal Hu. Openai, google, others pledge to watermark ai content for safety, white house says. https://www.reuters.com/technology/openai-google-others-pledge-watermark-ai-content-safety-white-house-2023-07-21/.

[4] Amazon. Watermark detection for amazon titan image generator now available in amazon bedrock.
https://aws.amazon.com/cn/about-aws/whats-new/2024/04/watermark-detection-amazon-titan-image-generator-bedrock/, 2024.

[5] Google DeepMind. Synthid: Identifying ai-generated content with synthid. https://deepmind.google/technologies/synthid/, 2023.

[6] Emilia David. Openai is adding new watermarks to dall-e 3. https://www.theverge.com/2024/2/6/24063954.ai-watermarks-dalle3-openai-content-credentials, 2024.

[7] Yusuf Mehdi. Announcing microsoft copilot, your everyday ai companion. https://blogs.microsoft.com/blog/2023/09/21/announcing-microsoft-copilot-your-everyday-ai-companion/, 2023.

[8] Zhengyuan Jiang, Jinghuai Zhang, and Neil Zhenqiang Gong. Evading watermark based detection of ai-generated content. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, pages 1168-1181, 2023.

[9] Xuandong Zhao, Kexun Zhang, Zihao Su, Saastha Vasan, Ilya Grishchenko, Christopher Kruegel, Giovanni Vigna, Yu-Xiang Wang, and Lei Li. Invisible image watermarks are provably removable using generative ai. Advances in Neural Information Processing Systems, 37:8643-8672, 2024.

[10] Pei Yang, Hai Ci, Yiren Song, and Mike Zheng Shou. Can simple averaging defeat modern watermarks? Advances in Neural Information Processing Systems, 37:56644-56673, 2024.

[11] Xuandong Zhao, Sam Gunn, Miranda Christ, Jaiden Fairoze, Andres Fabrega, Nicholas Carlini, Sanjam Garg, Sanghyun Hong, Milad Nasr, Florian Tramer, et al. Sok: Watermarking for ai-generated content. arXiv preprint arXiv:2411.18479, 2024.

[12] Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, and Soheil Feizi. Can ai-generated text be reliably detected? arXiv preprint arXiv:2303.11156, 2023.

[13] Chenchen Gu, Xiang Lisa Li, Percy Liang, and Tatsunori Hashimoto. On the learnability of watermarks for language models. arXiv preprint arXiv:2312.04469, 2023.

[14] Andreas Müller, Denis Lukovnikov, Jonas Thietke, Asja Fischer, and Erwin Quiring.
Black-box forgery attacks on semantic watermarks for diffusion models. arXiv preprint arXiv:2412.03283, 2024.

[15] Mehrdad Saberi, Vinu Sankar Sadasivan, Keivan Rezaei, Aounon Kumar, Atoosa Chegini, Wenxiao Wang, and Soheil Feizi. Robustness of ai-image detectors: Fundamental limits and practical attacks. arXiv preprint arXiv:2310.00076, 2023.

[16] Ruowei Wang, Chenguo Lin, Qijun Zhao, and Feiyu Zhu. Watermark faker: towards forgery of digital image watermarking. In 2021 IEEE International Conference on Multimedia and Expo (ICME), pages 1-6. IEEE, 2021.

[17] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.

[18] Nicolas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramer, Borja Balle, Daphne Ippolito, and Eric Wallace. Extracting training data from diffusion models. In 32nd USENIX Security Symposium (USENIX Security 23), pages 5253-5270, 2023.

[19] Ning Yu, Vladislav Skripniuk, Dingfan Chen, Larry S Davis, and Mario Fritz. Responsible disclosure of generative models using scalable fingerprinting. In International Conference on Learning Representations, 2021.

[20] Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Ngai-Man Cheung, and Min Lin. A recipe for watermarking diffusion models. arXiv preprint arXiv:2303.10137, 2023.

[21] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020.

[22] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6038-6047, 2023.
[23] Xuan Ju, Ailing Zeng, Yuxuan Bian, Shaoteng Liu, and Qiang Xu. Direct inversion: Boosting diffusion-based editing with 3 lines of code. arXiv preprint arXiv:2310.01506, 2023.

[24] Wenda Li, Huijie Zhang, and Qing Qu. Shallow diffuse: Robust and invisible watermarking through low-dimensional subspaces in diffusion models. arXiv preprint arXiv:2410.21088, 2024.

[25] Huayang Huang, Yu Wu, and Qian Wang. Robin: Robust and invisible watermarks for diffusion models with adversarial optimization. Advances in Neural Information Processing Systems, 37:3937-3963, 2024.

[26] Pierre Fernandez, Guillaume Couairon, Hervé Jégou, Matthijs Douze, and Teddy Furon. The stable signature: Rooting watermarks in latent diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22466-22477, 2023.

[27] StabilityAI. Stable diffusion github repository. https://github.com/Stability-AI/stablediffusion.

[28] Nils Lukas and Florian Kerschbaum. PTW: Pivotal tuning watermarking for pre-trained image generators. In 32nd USENIX Security Symposium (USENIX Security 23), pages 2241-2258, 2023.

[29] Promptbase. https://promptbase.com/, 2024.

[30] Prompthero. https://prompthero.com/midjourney-prompts, 2024.

[31] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021.

[32] John J. Vastola. Generalization through variance: how noise shapes inductive biases in diffusion models. arXiv preprint arXiv:2504.12532, 2025.

[33] Daniel Garibi, Or Patashnik, Andrey Voynov, Hadar Averbuch-Elor, and Daniel Cohen-Or. Renoise: Real image inversion through iterative noising. In European Conference on Computer Vision, pages 395-413. Springer, 2024.

[34] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems, 32, 2019.
[35] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.

[36] Zijie J Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and Duen Horng Chau. Diffusiondb: A large-scale prompt gallery dataset for text-to-image generative models. arXiv preprint arXiv:2210.14896, 2022.

[37] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022.

[38] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014.

[39] Huaibo Huang, Ran He, Zhenan Sun, Tieniu Tan, et al. Introvae: Introspective variational autoencoders for photographic image synthesis. Advances in neural information processing systems, 31, 2018.

[40] Ali Al-Haj. Combined dwt-dct digital image watermarking. Journal of computer science, 3(9):740-746, 2007.

[41] Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei. Hidden: Hiding data with deep networks. In Proceedings of the European conference on computer vision (ECCV), pages 657-672, 2018.

[42] Kevin Alex Zhang, Lei Xu, Alfredo Cuesta-Infante, and Kalyan Veeramachaneni. Robust invisible video watermarking with attention. arXiv preprint arXiv:1909.01285, 2019.

[43] Deepshikha Chopra, Preeti Gupta, Gaur Sanjay, and Anil Gupta. Lsb based digital image watermarking for gray scale image. IOSR Journal of Computer Engineering, 6(1):36-41, 2012.

[44] K. A. Navas, Mathews Cheriyan Ajay, M. Lekshmi, Tampy S. Archana, and M. Sasikumar.
DWT-DCT-SVD based watermarking. In 2008 3rd International Conference on Communication Systems Software and Middleware and Workshops (COMSWARE '08), pages 271-274. IEEE, January 2008.

[45] Matthew Tancik, Ben Mildenhall, and Ren Ng. Stegastamp: Invisible hyperlinks in physical photographs. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2117-2126, 2020.

[46] Han Fang, Zhaoyang Jia, Zehua Ma, Ee-Chien Chang, and Weiming Zhang. PIMoG: An effective screen-shooting noise-layer simulation for deep-learning-based watermarking network. In Proceedings of the 30th ACM International Conference on Multimedia, pages 2267-2275, Lisboa, Portugal, October 2022. ACM.

[47] Zhaoyang Jia, Han Fang, and Weiming Zhang. Mbrs: Enhancing robustness of dnn-based watermarking by mini-batch of real and simulated jpeg compression. In Proceedings of the 29th ACM international conference on multimedia, pages 41-49, 2021.

[48] Ning Yu, Vladislav Skripniuk, Sahar Abdelnabi, and Mario Fritz. Artificial fingerprinting for generative models: Rooting deepfake attribution in training data. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 14428-14437, Montreal, QC, Canada, October 2021. IEEE. ISBN 978-1-66542-812-5. doi: 10.1109/ICCV48922.2021.01418.

[49] Yuxin Wen, John Kirchenbauer, Jonas Geiping, and Tom Goldstein. Tree-ring watermarks: Fingerprints for diffusion images that are invisible and robust. arXiv preprint arXiv:2305.20030, 2023.

[50] Zijin Yang, Kai Zeng, Kejiang Chen, Han Fang, Weiming Zhang, and Nenghai Yu. Gaussian shading: Provable performance-lossless image watermarking for diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.

[51] Martin Kutter, Sviatoslav V Voloshynovskiy, and Alexander Herrigel. Watermark copy attack. In Security and Watermarking of Multimedia Contents II, volume 3971, pages 371-380. SPIE, 2000.
[52] Vitaliy Kinakh, Brian Pulfer, Yury Belousov, Pierre Fernandez, Teddy Furon, and Slava Voloshynovskiy. Evaluation of security of ml-based watermarking: Copy and removal attacks. arXiv preprint arXiv:2409.18211, 2024.

[53] Guanlin Li, Yifei Chen, Jie Zhang, Jiwei Li, Shangwei Guo, and Tianwei Zhang. Warfare: Breaking the watermark protection of ai-generated content. arXiv e-prints, 2023.

[54] Google DeepMind. Imagen 2. https://deepmind.google/technologies/imagen-2/.

[55] Amazon. Amazon titan foundation models - generative ai. https://aws.amazon.com/cn/bedrock/amazon-models/titan/.

[56] Amazon. Amazon titan image generator and watermark detection api are now available in amazon bedrock. https://aws.amazon.com/cn/blogs/aws/amazon-titan-image-generator-and-watermark-detection-api-are-now-available-in-amazon-bedrock/.

# NeurIPS Paper Checklist

# 1. Claims

Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?

Answer: [Yes]

Justification: The main claims made in the abstract and introduction accurately reflect the paper's contributions and scope.

Guidelines:

- The answer NA means that the abstract and introduction do not include the claims made in the paper.
- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

# 2. Limitations

Question: Does the paper discuss the limitations of the work performed by the authors?

Answer: [Yes]

Justification: Please see Section G.
+ +Guidelines: + +- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. +- The authors are encouraged to create a separate "Limitations" section in their paper. +- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. +- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. +- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. +- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. +- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. +- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. 
+ +# 3. Theory assumptions and proofs + +Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? + +Answer: [NA] + +Justification: The paper does not include theoretical results. + +# Guidelines: + +- The answer NA means that the paper does not include theoretical results. +- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced. +- All assumptions should be clearly stated or referenced in the statement of any theorems. +- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. +- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. +- Theorems and Lemmas that the proof relies upon should be properly referenced. + +# 4. Experimental result reproducibility + +Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? + +Answer: [Yes] + +Justification: The code will be available at the URL mentioned in the abstract. + +# Guidelines: + +- The answer NA means that the paper does not include experiments. +- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. +- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. +- Depending on the contribution, reproducibility can be accomplished in various ways. 
For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. +- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example +(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. +(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. +(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). +(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. + +# 5. 
Open access to data and code

Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?

Answer: [Yes]

Justification: The code will be available at the URL mentioned in the abstract. We use an open-source diffusion model and data, which are cited correctly in the main paper and the appendix.

Guidelines:

- The answer NA means that the paper does not include experiments requiring code.
- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

# 6.
Experimental setting/details

Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?

Answer: [Yes]

Justification: Please see Appendix F and Section 5.3.

Guidelines:

- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.

# 7. Experiment statistical significance

Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

Answer: [No]

Justification: Error bars are not presented because they would be too computationally expensive.

Guidelines:

- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it.
The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g., negative error rates).
- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

# 8. Experiments compute resources

Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

Answer: [Yes]

Justification: Please see Appendix F.

Guidelines:

- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

# 9. Code of ethics

Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?

Answer: [Yes]

Justification: The research conducted in the paper conforms with the NeurIPS Code of Ethics.

Guidelines:

- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
+- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). + +# 10. Broader impacts + +Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? + +Answer: [Yes] + +Justification: Please see Appendix H. + +Guidelines: + +- The answer NA means that there is no societal impact of the work performed. +- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. +- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. + +- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. +- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. 
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

# 11. Safeguards

Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?

Answer: [Yes]

Justification: Please see Section 7.

Guidelines:

- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

# 12. Licenses for existing assets

Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

Answer: [Yes]

Justification: All datasets, code, and models we used are public and properly cited.

Guidelines:

- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
+- The name of the license (e.g., CC-BY 4.0) should be included for each asset. +- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. +- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. +- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. + +- If this information is not available online, the authors are encouraged to reach out to the asset's creators. + +# 13. New assets + +Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? + +Answer: [Yes] + +Justification: The code and data will be available at the URL mentioned in the abstract. + +Guidelines: + +- The answer NA means that the paper does not release new assets. +- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. +- The paper should discuss whether and how consent was obtained from people whose asset is used. +- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. + +# 14. Crowdsourcing and research with human subjects + +Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? + +Answer: [NA] + +Justification: The paper does not involve crowdsourcing nor research with human subjects. 
+ +Guidelines: + +- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. +- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. +- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. + +# 15. Institutional review board (IRB) approvals or equivalent for research with human subjects + +Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? + +Answer: [NA] + +Justification: The paper does not involve crowdsourcing nor research with human subjects. + +Guidelines: + +- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. +- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. +- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. +- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review. + +# 16. Declaration of LLM usage + +Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? 
Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.

Answer: [NA]

Justification: LLMs were used only for writing.

Guidelines:

- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.

# A Algorithm

# Algorithm 1 WMCopier

Require: Clean image $x$; noise predictor $\epsilon_{\theta}$ of the pretrained diffusion model $\mathcal{M}_{\theta}$; inversion steps $T_{S}$; refinement iterations $L$; low-noise step $t_l$ for refinement; step size $\eta$; trade-off coefficient $\lambda$.

Ensure: Forged watermarked image $x^{f}$

$x_{T_S} \gets \text{Inversion}(x, T_S)$ # Obtain the noisy latent at step $T_S$ via DDIM inversion
$x_{T_S}' \gets x_{T_S}$ # Initialize the starting point of sampling
for $t = T_S, T_S - 1, \ldots, 1$ do # DDIM sampling
 $\epsilon_t \gets \epsilon_\theta(x'_t, t)$
 $x_{t-1}' \gets \sqrt{\alpha_{t-1}} \cdot \left( \frac{x'_t - \sqrt{1 - \alpha_t} \cdot \epsilon_t}{\sqrt{\alpha_t}} \right) + \sqrt{1 - \alpha_{t-1}} \cdot \epsilon_t$
end for
$x^{f(1)} \gets x_0'$
for $i = 1$ to $L$ do # Refinement
 Sample $z \sim \mathcal{N}(0, \mathbf{I})$
 $x_{t_l}^{f(i)} \gets \sqrt{\alpha_{t_l}} \cdot x^{f(i)} + \sqrt{1 - \alpha_{t_l}} \cdot z$ # Add noise up to the low-noise step $t_l$
 $x^{f(i+1)} \gets x^{f(i)} + \eta \cdot \nabla_{x^{f(i)}}\left(-\frac{1}{\sqrt{1 - \alpha_{t_l}}} \cdot \epsilon_\theta(x_{t_l}^{f(i)}, t_l)\right) - \lambda \cdot \|x^{f(i)} - x\|^2$
end for
return $x^{f} \gets x^{f(L+1)}$

# B Real-World Deployment

In line with commitments made to the White House, leading U.S.
AI companies that provide generative AI services are implementing watermarking systems to embed watermark information into model-generated content before it is delivered to users [3].

Google introduced SynthID [5], which adds invisible watermarks to both Imagen 3 and Imagen 2 [54]. Amazon has deployed invisible watermarks on its Titan image generator [4].

Meanwhile, OpenAI and Microsoft are transitioning from metadata-based watermarking to invisible methods. OpenAI points out that invisible watermarking techniques are superior to the visible watermarks and metadata methods previously used in DALL-E 2 and DALL-E 3 [6], owing to their imperceptibility and robustness to common image manipulations such as screenshots, compression, and cropping. Microsoft has announced plans to incorporate invisible watermarks into AI-generated images in Bing [7]. Table 6 summarizes watermarking systems deployed in text-to-image models.

Table 6: Watermarking deployment across major Gen-AI service providers.
| Service Provider | Watermark | Generative Model | Deployed | Detector |
| --- | --- | --- | --- | --- |
| OpenAI | Invisible | DALL·E 2 & DALL·E 3 | In Progress | Unknown |
| Google (SynthID) | Invisible | Imagen 2 & Imagen 3 | Deployed | Not Public |
| Microsoft | Invisible | DALL·E 3 (Bing) | In Progress | Unknown |
| Amazon | Invisible | Titan | Deployed | Public |
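Returning to the method itself, the DDIM inversion-and-sampling core of Algorithm 1 (Appendix A) can be sketched in a few lines of NumPy. Everything below is a placeholder of ours, not the paper's trained model: `eps_theta` stands in for the pretrained noise predictor, and the alpha schedule and dimensions are toy values. With a constant predictor the deterministic round trip is exact, which makes the sketch easy to sanity-check.

```python
import numpy as np

def ddim_sample_step(x_t, eps, a_t, a_prev):
    """One deterministic DDIM step from cumulative noise level a_t to a_prev."""
    x0_pred = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
    return np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps

def ddim_invert(x0, eps_theta, alphas):
    """Run the sampling step in reverse to obtain the noisy latent x_{T_S}."""
    x = x0
    for t in range(1, len(alphas)):
        eps = eps_theta(x, t - 1)
        x = ddim_sample_step(x, eps, alphas[t - 1], alphas[t])  # noise up one level
    return x

# Toy setup: alphas[0] = 1 (clean image), decreasing towards alphas[T_S].
T_S = 10
alphas = np.linspace(1.0, 0.3, T_S + 1)
eps_theta = lambda x, t: np.zeros_like(x)      # placeholder noise predictor
x = np.random.default_rng(0).normal(size=(4, 4))

x_T = ddim_invert(x, eps_theta, alphas)        # DDIM inversion
x_rec = x_T
for t in range(T_S, 0, -1):                    # DDIM sampling, as in Algorithm 1
    eps = eps_theta(x_rec, t)
    x_rec = ddim_sample_step(x_rec, eps, alphas[t], alphas[t - 1])

print(np.allclose(x_rec, x))                   # prints True: deterministic round trip
```

In the real attack, the predictor is the diffusion model trained on watermarked images, so the regenerated `x_rec` carries the learned watermark distribution rather than reproducing `x` exactly.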
+ +# C Watermark Schemes + +# C.1 Open-source Watermarking Schemes + +DWT-DCT. DWT-DCT [40] is a classical watermarking technique that embeds watermark bits into the frequency domain of the image. It first applies the discrete wavelet transform (DWT) to decompose the image into sub-bands and then performs the discrete cosine transform (DCT) on selected sub-bands. + +HiDDeN. HiDDeN [41] is a neural network-based watermarking framework using an encoder-decoder architecture. A watermark message is embedded into an image via a convolutional encoder, and a + +decoder is trained to recover the message. Additionally, a noise simulation layer is inserted between the encoder and decoder to encourage robustness. + +RivaGAN. RivaGAN embeds watermark messages into video or image frames using a GAN-based architecture. A generator network embeds the watermark into the input image, while a discriminator ensures visual quality. + +Stable Signature. As an in-processing watermarking technique, Stable Signature [26] couples the watermark message with the parameters of the stable diffusion model. It is an invisible watermarking method proposed by Meta AI, which embeds a unique binary signature into images generated by latent diffusion models (LDMs) through fine-tuning the model's decoder. + +Setup. In our experiments, all schemes are evaluated under their default configurations, including the default image resolutions (128×128 for HiDDeN, 256×256 for RivaGAN, and 512×512 for both Stable Signature and Amazon), as well as their default watermark lengths (32 bits for DWT-DCT and RivaGAN, 30 bits for HiDDeN, and 48 bits for Stable Signature). With regard to PSNR, we report both the original PSNR of these schemes and the PSNR of our forged samples in Table 7. + +Table 7: PSNR of watermarking schemes and our forged samples + +
| Scheme | DWT-DCT | HiDDeN | RivaGAN | Stable Signature |
| --- | --- | --- | --- | --- |
| PSNR (Original) | 38.50 | 31.88 | 38.61 | 31.83 |
| PSNR (Ours) | 33.69 | 31.74 | 34.07 | 31.29 |
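The PSNR values reported in Table 7 follow the standard peak signal-to-noise ratio definition; a minimal sketch (the 8-bit peak value of 255 is our assumption about the pixel range):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 16.0)          # uniform error of 16 gray levels
print(round(psnr(a, b), 2))        # → 24.05, i.e. 10*log10(255^2 / 256)
```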
# C.2 Closed-Source Watermarking System

Among the available options, Google does not open its watermark detection mechanisms to users, making it impossible to evaluate the success of our attack. In contrast, Amazon provides access to its watermark detection for the Titan model [55], allowing us to directly measure the performance of our attack. Therefore, we chose Amazon's watermarking scheme for our experiments. Amazon's watermarking scheme, referred to as Amazon WM, ensures that AI-generated content can be traced back to its source. The watermark detection API detects whether an image was generated by the Titan model and provides a confidence level for the detection.$^3$ This confidence level reflects the likelihood that the image contains a valid watermark, as illustrated in Figure 6.

In our experiments, we generated 5,000 images from the Titan model using Amazon Bedrock [56]. Specifically, we used ten different prompts to generate images with the Titan model, which were then employed to carry out our attack. Example prompts are listed in Figure 7. In this attack, we embedded Amazon's watermark onto four datasets, each containing 100 images. Finally, we submitted the forged images to Amazon's watermark detection API. Additionally, we forged Amazon's watermark on images from non-public datasets, including human-captured photos and web-sourced images, all of which were flagged as Titan-generated.

![](images/6d51f14d9e29f8ddcfe2bc2ef62a8623889fedc51abc6594024acd29414aeaea.jpg)
Watermark detected (Confidence: High)
Figure 6: Result from Amazon's watermark detection API. Bedrock detected a watermark generated by the Titan Image Generator model on this image.

1.
A serene landscape of a misty forest at sunrise, with golden light filtering through the trees and a calm river flowing in the foreground, ultra-realistic and soft lighting.
2. A futuristic cityscape at night, with glowing neon lights reflecting on wet streets, flying cars and towering skyscrapers, cyberpunk style, highly detailed.
3. A majestic lion standing proudly on a cliff at sunset, with a dramatic orange sky and rolling hills in the background, hyper-realistic, high detail fur texture.
4. An abstract painting of swirling vibrant colors, reminiscent of Van Gogh's 'Starry Night', using bold brushstrokes and a mix of blue, yellow, and white.
5. A beautiful, tranquil Japanese garden with a koi pond, cherry blossom trees in full bloom, and a traditional tea house, soft sunlight filtering through the branches.
6. A fantasy scene of a dragon flying over a medieval castle, with smoke rising from its nostrils and a stormy sky in the background, highly detailed, dark fantasy style.
7. A close-up of a dew-covered spiderweb in the morning, with sunlight sparkling on the droplets, extremely detailed, sharp focus on the texture and reflection.
8. A peaceful 1920s Parisian street view, featuring cozy outdoor cafes, charming cobblestone pathways, and vintage buildings with intricate architecture.
9. An astronaut standing on the surface of Mars, gazing at the Earth in the distance, with red rocky terrain and a clear blue sky, photorealistic, high contrast.
10. A magical winter wonderland with snow-covered trees, a frozen lake reflecting the pale blue sky, and soft sunlight peeking through the branches, ultra-realistic and serene.

Figure 7: Example prompts used for image generation with the Titan model.

# D External Experiment Results

# D.1 Further Analysis of DWT-DCT Attack Results

We observed that DWT-DCT suffers from low bit-accuracy on certain images, which leads to unreliable watermark detection and verification.
To reflect a more practical scenario, we assume that the service provider returns only images with high bit accuracy to users, to ensure traceability. Specifically, we select 5,000 images with $100\%$ bit accuracy to construct our auxiliary dataset $\mathcal{D}_{aux}$. We then apply both the original DWT-DCT scheme and our attack to add watermarks to clean images from four datasets. As shown in Table 8, our method achieves even higher bit-accuracy than the original watermarking process itself.

Table 8: Comparison of bit accuracy between original DWT-DCT and DWT-DCT (Ours).
| Dataset | Bit-acc.↑ (Original) | FPR@10-6↑ (Original) | Bit-acc.↑ (WMCopier) | FPR@10-6↑ (WMCopier) |
| --- | --- | --- | --- | --- |
| MS-COCO | 82.15% | 56.60% | 89.19% | 60.20% |
| CelebA-HQ | 84.70% | 54.70% | 89.46% | 53.20% |
| ImageNet | 85.37% | 55.30% | 88.25% | 55.80% |
| DiffusionDB | 82.42% | 52.90% | 85.17% | 54.30% |
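The filtering step used to build $\mathcal{D}_{aux}$ (keeping only images whose watermark decodes with $100\%$ bit accuracy) can be sketched as follows; the `(image, embedded, decoded)` triples and the decoder outputs are hypothetical stand-ins for a real DWT-DCT extractor:

```python
import numpy as np

def bit_accuracy(embedded, decoded):
    """Fraction of watermark bits recovered correctly."""
    embedded, decoded = np.asarray(embedded), np.asarray(decoded)
    return float(np.mean(embedded == decoded))

def build_aux_dataset(samples, threshold=1.0):
    """Keep only watermarked images whose message decodes perfectly,
    mirroring the 100%-bit-accuracy filter used to construct D_aux."""
    return [img for img, msg, dec in samples if bit_accuracy(msg, dec) >= threshold]

# Toy example: (image, embedded message, decoded message) triples.
good = (np.zeros((2, 2)), [1, 0, 1, 1], [1, 0, 1, 1])
bad = (np.ones((2, 2)), [1, 0, 1, 1], [1, 1, 1, 1])
print(len(build_aux_dataset([good, bad])))  # → 1 (only the perfect decode survives)
```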
+ +# D.2 Semantic Watermark + +Semantic watermarking [49, 50] embeds watermark information that is intrinsically tied to the semantic content of the image. To further investigate the effectiveness of our attack on semantic watermarking, we compare it with the forgery attack proposed by Müller et al. [14], which is specifically designed for semantic watermark schemes. We adopt Treering [49] as the target watermark. As shown in Table 9, both our method and Müller's achieve a $100\%$ false positive rate (FPR) under Treering's default threshold of 0.01. However, our method produces significantly higher forgery quality, with an average PSNR over 30 dB, compared to around 26 dB for Müller's. + +We also evaluate Müller's method on a non-semantic watermark, Stable Signature. As summarized in Table 10, Müller's approach fails to attack this type of watermark, while our method maintains a high success rate. + +Table 9: Comparison with Müller et al. [14] and our attack on Treering. + +
| Dataset | Müller et al. [14]: PSNR↑ | Müller et al. [14]: FPR@0.01↑ | Ours: PSNR↑ | Ours: FPR@0.01↑ |
| --- | --- | --- | --- | --- |
| MS-COCO | 26.14 | 100.00% | 32.72 | 100.00% |
| CelebA-HQ | 25.22 | 100.00% | 31.52 | 100.00% |
| ImageNet | 26.82 | 100.00% | 32.99 | 100.00% |
| DiffusionDB | 25.19 | 100.00% | 32.78 | 100.00% |
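The FPR@0.01 column in Table 9 counts how often the detector's score for a forged image clears Treering's default decision threshold of 0.01; a sketch with hypothetical detection p-values:

```python
def forged_fpr(p_values, threshold=0.01):
    """Fraction of forged images accepted as watermarked by the detector,
    i.e. the attack success rate at the given p-value threshold."""
    flagged = sum(1 for p in p_values if p < threshold)
    return flagged / len(p_values)

# Hypothetical detector p-values for four forged images.
print(forged_fpr([1e-5, 3e-4, 2e-3, 9e-3]))  # → 1.0 (100%, as in Table 9)
```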
+ +Table 10: Comparison with Müller et al. [14] and our attack on Stable Signature. + +
| Dataset | Müller et al. [14]: PSNR↑ | Müller et al. [14]: Forged Bit-Acc.↑ | Müller et al. [14]: FPR@10-6↑ | Ours: PSNR↑ | Ours: Forged Bit-Acc.↑ | Ours: FPR@10-6↑ |
| --- | --- | --- | --- | --- | --- | --- |
| MS-COCO | 25.66 | 45.70% | 0.00% | 31.29 | 98.04% | 94.60% |
| CelebA-HQ | 24.73 | 51.23% | 0.00% | 30.54 | 96.04% | 100.00% |
| ImageNet | 25.91 | 47.71% | 0.00% | 31.33 | 97.03% | 98.60% |
| DiffusionDB | 26.12 | 48.45% | 0.00% | 31.59 | 96.24% | 96.60% |
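FPR@10-6 is measured against a bit-accuracy threshold calibrated so that a random, unwatermarked image matches by chance with probability at most $10^{-6}$. Assuming i.i.d. fair-coin bits (our modeling assumption), that threshold follows from a binomial tail; for Stable Signature's 48-bit messages:

```python
from math import comb

def detection_threshold(k, target_fpr=1e-6):
    """Smallest match count m such that P(>= m of k random bits match)
    stays at or below target_fpr."""
    tail = 0.0
    for m in range(k, -1, -1):
        tail += comb(k, m) / 2.0 ** k   # P(exactly m of k bits match by chance)
        if tail > target_fpr:
            return m + 1
    return 0

m = detection_threshold(48)
print(m, round(m / 48, 3))  # → 41 0.854
```

Forged bit accuracies of 96-98% comfortably clear this roughly 85% bar, which is why the FPR@10-6 column for our attack sits near 100%.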
+ +# D.3 Discrimination of Forged Watermarks by Robustness Gap + +While the robustness gap between genuine and forged watermarks offers a promising direction for detecting forged samples, we find it is insufficient for reliable discrimination. This limitation becomes particularly evident when genuine samples have already been subjected to mild distortions. + +In discrimination, samples are classified as forgeries if their bit accuracy falls below a predefined threshold $\kappa$ after applying a single perturbation. Specifically, we apply perturbation $A$ to both genuine and forged watermark images and then distinguish them based on their bit accuracy. However, considering the inherent robustness of the watermarking scheme itself, when genuine watermarked images have already undergone slight perturbation $B$ , the bit accuracy values of genuine and forged samples become indistinguishable. For distortion $A$ , we use Gaussian noise with $\sigma = 0.05$ , while for distortion $B$ , Gaussian noise with $\sigma = 0.02$ is applied. The ROC curve and the bit-accuracy distribution for this case are shown in Figure 8. + +![](images/ff03eaf090af03015f5ddcf35061d6b8bd9356bf910cb7535d52eeb0cb1e969a.jpg) +Figure 8: ROC curve and bit accuracy distribution (KDE) for genuine and forged watermark samples under Gaussian noise. + +![](images/d14a234e7cf9b92d67d49cd2ea8086794c673d55c39ce1d58c7729bc19233049.jpg) + +# E Additional Ablation Studies + +Table 11 shows that the proposed refinement step substantially improves visual fidelity, as measured by PSNR, while simultaneously enhancing forgery performance (forged-bit accuracy). + +We also explore the impact of varying the size of $D_{\mathrm{aux}}$ . Specifically, we use 1,000, 5,000, and 10,000 collected RivaGAN watermarked images. As shown in Table 12, larger $D_{\mathrm{aux}}$ generally yields higher forged-bit accuracy and higher FPR across datasets. 
However, the improvement becomes marginal once the size of $D_{\mathrm{aux}}$ reaches around 5,000, indicating that the attack performance saturates beyond this point. + +Table 11: Impact of refinement on forgery performance. + +
| Watermark Scheme | PSNR↑ (W/o Ref.) | PSNR↑ (W/ Ref.) | Forged Bit-acc.↑ (W/o Ref.) | Forged Bit-acc.↑ (W/ Ref.) | FPR@10-6↑ (W/o Ref.) | FPR@10-6↑ (W/ Ref.) |
| --- | --- | --- | --- | --- | --- | --- |
| DWT-DCT | 32.40 | 33.77 | 63.03% | 89.62% | 16.00% | 57.00% |
| HiDDeN | 29.81 | 32.79 | 80.60% | 99.40% | 89.00% | 94.00% |
| RivaGAN | 31.89 | 34.03 | 89.90% | 95.90% | 84.00% | 96.00% |
| Stable Signature | 25.60 | 31.27 | 97.58% | 98.19% | 91.00% | 98.00% |
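The refinement step ablated in Table 11 follows the loop at the end of Algorithm 1: perturb the current forgery to a low-noise step, take a score-guided gradient step, and penalize drift from the original image. The sketch below is our loose NumPy reading of that loop; `eps_theta`, the schedule value `alpha_tl`, and all hyperparameters are placeholders rather than the paper's settings:

```python
import numpy as np

def refine(x_f, x_orig, eps_theta, alpha_tl, t_l, steps=5, eta=0.1, lam=0.01, seed=0):
    """Gradient-style refinement: nudge x_f towards higher model likelihood
    (via the score estimate -eps / sqrt(1 - alpha)) while penalising drift
    from the original image, mirroring Algorithm 1's refinement loop."""
    rng = np.random.default_rng(seed)
    x = x_f.copy()
    for _ in range(steps):
        z = rng.standard_normal(x.shape)
        x_noisy = np.sqrt(alpha_tl) * x + np.sqrt(1.0 - alpha_tl) * z  # jump to step t_l
        score = -eps_theta(x_noisy, t_l) / np.sqrt(1.0 - alpha_tl)     # score estimate
        x = x + eta * score - eta * 2.0 * lam * (x - x_orig)           # fidelity penalty
    return x

# Placeholder predictor: pulls samples towards zero, standing in for the model.
eps_theta = lambda x, t: 0.1 * x
x0 = np.ones((4, 4))
out = refine(x0 * 1.5, x0, eps_theta, alpha_tl=0.9, t_l=50)
print(out.shape)  # same shape as the input forgery
```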
+ +Table 12: Performance comparison across datasets with different sizes of ${D}_{aux}$ + +
| Dataset | PSNR↑ (1000) | Forged Bit-acc.↑ (1000) | FPR@10-6↑ (1000) | PSNR↑ (5000) | Forged Bit-acc.↑ (5000) | FPR@10-6↑ (5000) | PSNR↑ (10000) | Forged Bit-acc.↑ (10000) | FPR@10-6↑ (10000) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MS-COCO | 34.16 | 81.82% | 80.70% | 34.07 | 95.74% | 96.40% | 34.47 | 97.81% | 96.30% |
| CelebA-HQ | 35.74 | 89.10% | 89.50% | 35.28 | 98.61% | 99.10% | 35.25 | 98.63% | 98.50% |
| ImageNet | 34.10 | 81.25% | 71.50% | 33.87 | 93.83% | 94.90% | 34.29 | 93.53% | 95.80% |
| DiffusionDB | 34.77 | 74.76% | 64.10% | 34.50 | 90.43% | 91.20% | 34.96 | 91.70% | 93.60% |
# F Training Details of the Diffusion Model

We adopt a standard DDIM framework for training, following the official Hugging Face tutorial.$^4$ The model is trained for 20,000 iterations with a batch size of 256 and a learning rate of $1 \times 10^{-4}$. The entire training process takes roughly 40 A100 GPU hours. To support different watermarking schemes, we only adjust the input resolution of the model to match the input dimensions of each watermark; other training settings and model configurations remain unchanged. Although the current training setup suffices for watermark forgery, enhancing the model's ability to better capture the watermark signal is left for future work. For our primary experiments, we train an unconditional diffusion model from scratch using 5,000 watermarked images. Due to the limited amount of training data, the diffusion model exhibits memorization [18], resulting in reduced sample diversity, as illustrated in Figure 9. All experiments are conducted on NVIDIA A100 GPUs.

# G Limitation

In this section, we discuss the limitations of our attack. While our current training paradigm already achieves effective watermark forgery, we have not yet systematically explored how to better guide diffusion models to capture the underlying watermark distribution. In this work, we employ a standard diffusion architecture without any specialized training strategies, and we leave the exploration of alternative architectures and training schemes to future work. Moreover, understanding why different watermark types exhibit varying forgery and learning behaviors remains an open problem. Additionally, our method requires a substantial amount of data and incurs training costs.

# H Broader Impact

Invisible watermarking plays a critical role in detecting AI-generated content and holding its sources accountable, making it a solution of significant societal importance.
Our research introduces a novel + +watermark forgery attack, revealing the vulnerabilities of current watermarking schemes to such attacks. Although our work involves the watermarking system deployed by Amazon, as responsible researchers, we have worked closely with Amazon's Responsible AI team to develop a solution, which has now been deployed. The Amazon Responsible AI team has issued the following statement: + +'On March 28, 2025, we released an update that improves the watermark detection robustness of our image generation foundation models (Titan Image Generator and Amazon Nova Canvas). With this change, we have maintained our existing watermark detection accuracy. No customer action is required. We appreciate the researchers from the State Key Laboratory of Blockchain and Data Security at Zhejiang University for reporting this issue and collaborating with us.' + +While our study highlights the potential risks of existing watermarking systems, we believe it plays a positive role in the early stages of their deployment. By providing valuable insights for improving current technologies, our work contributes to enhancing the security and robustness of watermarking systems, ultimately fostering more reliable solutions with a positive societal impact. 
![](images/3f781db502af3da8ab16fbf8e79c2c2932a585304c0fe074fbf955f15d713ad7.jpg)
Figure 9: Generated images from diffusion models trained on 5,000 watermarked images

# I Forged Samples

![](images/65acd98ce4334b01fd9cb37de9adaf661e023da200ea6cc8134a98a1e6828c2d.jpg)
Figure 10: Examples of forged Amazon watermark samples on the DiffusionDB

![](images/035d2312a74ee3ba42fa086bfe21662238d8641c081744493f31faa3159f4108.jpg)
Figure 11: Examples of forged Amazon watermark samples on the MS-COCO

![](images/57c46d32626b7bc2d055f0dd5e3aa3659c64357a25353a6815383c2ba843742a.jpg)
Figure 12: Examples of forged Amazon watermark samples on the CelebA-HQ

![](images/62aedef55880162b97cefaa6de660b96b188ecffd6c4d03f49fcfef22e5d93c2.jpg)
Figure 13: Examples of forged Amazon watermark samples on the ImageNet
# WaLRUS: Wavelets for Long-range Representation Using SSMs

Hossein Babaei, Mel White, Sina Alemohammad, Richard G. Baraniuk

Department of Electrical and Computer Engineering, Rice University
{hb26,mel.white,sa86,richb}@rice.edu

# Abstract

State-Space Models (SSMs) have proven to be powerful tools for modeling long-range dependencies in sequential data. While the recent method known as HiPPO has demonstrated strong performance, and formed the basis for the machine learning models S4 and Mamba, it remains limited by its reliance on closed-form solutions for a few specific, well-behaved bases. The SaFARi framework generalized this approach, enabling the construction of SSMs from arbitrary frames, including non-orthogonal and redundant ones, thus allowing an infinite diversity of possible "species" within the SSM family. In this paper, we introduce WaLRUS (Wavelets for Long-range Representation Using SSMs). We compare WaLRUS to HiPPO-based models and demonstrate improved accuracy and more efficient implementations for online function approximation tasks.

# 1 Introduction

Sequential data is foundational to many machine learning tasks, including natural language processing, speech recognition, and video understanding [1-3]. These applications require models that can effectively process and retain information over long time horizons.
A central challenge in this setting is the efficient representation of long-range dependencies in a way that preserves essential features of the input signal for downstream tasks, while remaining computationally tractable during both training and inference [4]. + +Recurrent neural networks (RNNs) are traditional choices for modeling sequential data, but struggle with long-term dependencies due to vanishing or exploding gradients during backpropagation through time [4-6]. While gated variants like LSTMs [7] and GRUs [8] mitigate some issues, they require significant tuning and lack compatibility with parallel processing, hindering scalability. + +State-space models (SSMs) offer a linear and principled framework for encoding temporal information, and have re-emerged as a powerful alternative for online representation of sequential data [9-16]. By design, they enable the online computation of compressive representations that summarize the entire input history using a fixed-size state vector, ensuring a constant memory footprint regardless of sequence length. A major breakthrough came with HiPPO (High-order Polynomial Projection Operators), which reformulates online representation as a function approximation problem using orthogonal polynomial bases [9]. This approach underpins state-of-the-art models like S4 and Mamba, enabling compact representations for long-range dependencies [10, 11]. + +However, existing SSMs primarily rely on Legendre and Fourier bases, which, although effective for smooth or periodic signals, struggle with non-stationary and localized features [9, 10]. These challenges are especially evident in domains such as audio, geophysics, and biomedical signal processing, where rapid transitions and sparse structure are common. + +To address this limitation, the SaFARi framework (State-Space Models for Frame-Agnostic Representation) extends HiPPO to arbitrary frames, including non-orthogonal and redundant bases [13, 14, 17]. 
+ +![](images/87c2926531ee1503b6170a1892a8472b140aec86b6ca2042ad38be14d5633714.jpg) +Figure 1: An input signal comprising three random spikes is sequentially processed by SSMs and reconstructed after observing the entire input. Only the wavelet-based SSM constructed using WaLRUS can clearly distinguish adjacent spikes. + +This generalization enables SSM construction from any frame via numerical solutions of first-order linear differential equations, preserving HiPPO's memory efficiency and update capabilities without closed-form restrictions. + +In this paper, we leverage the SaFARi method with wavelet frames to introduce a new model, WaLRUS (Wavelets for Long-range Representation Using SSMs). We derive our model using Daubechies wavelets with two variants: scaled-WaLRUS and translated-WaLRUS, designed for capturing non-smooth and localized features through compactly supported, multi-resolution wavelet decompositions [18]. These properties allow WaLRUS to retain fine-grained signal details typically lost by polynomial-based models. + +We also provide a comparative analysis of WaLRUS and existing HiPPO variants (see Fig. 1). Empirical results demonstrate that the wavelet-based WaLRUS model consistently outperforms Legendre and Fourier-based HiPPO models in reconstruction accuracy, especially on signals with sharp transients. Furthermore, WaLRUS has been experimentally observed to be stably diagonalizable, which is the key enabler of efficient convolution-based implementations and parallel computation [13, 14]. + +These results highlight the practical advantages of WaLRUS models, particularly in scenarios where signal structure varies across time and scale. By bridging multiscale signal analysis and online function approximation, WaLRUS opens new directions for modeling complex temporal phenomena across disciplines. 
# 2 Background

Recent advances in machine learning, computer vision, and large language models have pushed the frontier of learning from long sequences of data. These applications demand models that can (1) generate compact representations of input streams, (2) preserve long-range dependencies, and (3) support efficient online updates.

Classical linear methods, such as the Fourier transform, offer compact representations in the frequency domain [19-23]. However, they are ill-suited for online processing: each new input requires recomputing the entire representation, making them inefficient for streaming data and limited in their memory horizon. Nonlinear models like recurrent neural networks (RNNs) and their gated variants (LSTMs, GRUs) have been more successful in sequence modeling, but they face well-known issues such as vanishing/exploding gradients and limited parallelization [4-6, 8]. Moreover, their representations are task-specific and not easily repurposed across different settings.

To resolve these issues, the HiPPO framework [9] casts online function approximation as a continuous projection of the input $u(t)$ onto a linear combination of the given basis functions $\mathcal{G}$. At every time $T$, it produces a compressed state vector $\vec{c}(T)$ that satisfies the update rule:

$$
\frac{d}{dT} \vec{c}(T) = -A_{(T)} \vec{c}(T) + B_{(T)} u(T). \tag{1}
$$

Here, $A_{(T)}$ and $B_{(T)}$ are derived based on the choice of polynomial basis and measure $\mu(t)$, which defines how recent history is weighted. Two commonly used measures are:

$$
\mu_{tr}(t) = \frac{1}{\theta} \mathbb{1}_{t \in [T-\theta, T]}, \quad \mu_{sc}(t) = \frac{1}{T} \mathbb{1}_{t \in [0, T]}. \tag{2}
$$

The translated measure $\mu_{tr}$ emphasizes recent history within a sliding window of length $\theta$, while the scaled measure $\mu_{sc}$ compresses the entire input history into a fixed-length representation.
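As a quick numerical sanity check of Eq. 2, both measures are probability measures on $[0, T]$: one weights the whole history uniformly, the other only a sliding window. The sketch below uses illustrative values of $T$, $\theta$, and grid resolution, not values from the paper.

```python
import numpy as np

# Illustrative values (not from the paper): total time T, window theta, grid size n.
T, theta, n = 10.0, 2.0, 100_000
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]

# Scaled measure: uniform weight over the entire history [0, T].
mu_sc = np.full_like(t, 1.0 / T)
# Translated measure: uniform weight over the sliding window [T - theta, T].
mu_tr = (1.0 / theta) * ((t >= T - theta) & (t <= T))

# Both integrate to 1 over [0, T] (Riemann sums close to 1 on a fine grid).
print(mu_sc.sum() * dt, mu_tr.sum() * dt)
```

The two weightings are what make the resulting coefficients either a sliding-window summary ($\mu_{tr}$) or a whole-history summary ($\mu_{sc}$) of the input.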
Despite its strengths, HiPPO is restricted to only a few bases (e.g., Legendre, Fourier), and deriving $A(t)$ and $B(t)$ in closed form is only tractable for specific basis-measure combinations.

SaFARi addressed this limitation by generalizing online function approximation to any arbitrary frame [17]. A frame $\Phi(t)$ is a set of elements $\{\phi_i(t)\}$ such that one can reconstruct any input $g(t)$ from the inner products $\langle g(t), \phi_i(t) \rangle$. For a given frame $\Phi$, its complex conjugate $\overline{\Phi}$, and its dual $\widetilde{\Phi}$, the scaled-SaFARi produces an SSM with $A$ and $B$ given by:

$$
\frac{\partial}{\partial T} \vec{c}(T) = -\frac{1}{T} A \vec{c}(T) + \frac{1}{T} B u(T), \quad A_{i,j} = \delta_{i,j} + \int_{0}^{1} t' \left. \frac{\partial}{\partial t} \overline{\phi}_i \right|_{t = t'} \widetilde{\phi}_j(t') \, dt', \quad B_i = \overline{\phi}_i(1) \tag{3}
$$

while the translated-SaFARi produces an SSM with $A$ and $B$ given by:

$$
\frac{\partial}{\partial T} \vec{c}(T) = -\frac{1}{\theta} A \vec{c}(T) + \frac{1}{\theta} B u(T), \quad A_{i,j} = \overline{\phi}_i(0) \widetilde{\phi}_j(0) + \int_{0}^{1} \left. \frac{\partial}{\partial t} \overline{\phi}_i \right|_{t = t'} \widetilde{\phi}_j(t') \, dt', \quad B_i = \overline{\phi}_i(1) \tag{4}
$$

In the appendix, we provide some theoretical background on Eq. 3 and Eq. 4 from [17].

Incremental update of SSMs: The differential equation in Eq. 1 can be solved incrementally. Following [9], we adopt the Generalized Bilinear Transform (GBT) [24], given by Eq. 5, for its superior numerical accuracy in first-order SSMs.
$$
c(t + \delta t) = \left(I + \delta t \, \alpha A_{t + \delta t}\right)^{-1} \left[ \left(I - \delta t \, (1 - \alpha) A_t\right) c(t) + \delta t \, B(t) u(t) \right] \tag{5}
$$

Diagonalization of $A$: Each GBT step involves matrix inversion and multiplication. If $A(t)$ has time-independent eigenvectors (e.g., $A(t) = g(t)A$), it can be diagonalized as $A(t) = V \Lambda(t) V^{-1}$, allowing a change of variables $\widetilde{c} = V^{-1} c$ and $\widetilde{B} = V^{-1} B(t)$, yielding:

$$
\frac{\partial}{\partial t} \widetilde{c} = -\Lambda(t) \widetilde{c} + \widetilde{B} u(t). \tag{6}
$$

This reduces each update to elementwise operations, significantly lowering computational cost.

# 2.1 Wavelet Frames

Wavelet frames offer a multiresolution analysis that captures both temporal and frequency characteristics of signals, making them particularly effective for representing non-stationary or long-range-dependent data [25]. Initiated by [26] and formalized by [27], wavelet theory gained prominence with Ingrid Daubechies' seminal work [28], which introduced compactly supported orthogonal wavelets. Since then, wavelets have played a central role in modern signal processing [29].

Wavelet analysis decomposes a signal $f(t)$ into dilations and translations of a mother wavelet $\psi(t)$, enabling simultaneous localization in time and frequency. The discrete wavelet transform is

$$
W(j, k) = \int_{-\infty}^{\infty} f(t) \psi_{j,k}^{*}(t) \, dt, \quad \psi_{j,k}(t) = \frac{1}{\sqrt{2^{-j}}} \psi\left(\frac{t - k}{2^{-j}}\right).
$$

Unlike global bases such as Fourier or polynomials, which struggle with localized discontinuities, wavelets provide sparse representations of signals with singularities, such as jumps or spikes [18, 30]. Their local support yields small coefficients in smooth regions and large coefficients near singularities, enabling efficient compression and accurate reconstruction.
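A minimal illustration of this sparsity pattern, using the Haar wavelet (the simplest member of the Daubechies family) purely for clarity; the paper's own construction uses the smoother Daubechies-22, and the signal below is an illustrative toy example.

```python
import numpy as np

# Toy piecewise-constant signal with a single jump at index 63 (illustrative).
f = np.concatenate([np.zeros(63), np.ones(65)])

# One level of the Haar transform: pairwise averages (approximation)
# and pairwise differences (detail coefficients).
approx = (f[0::2] + f[1::2]) / np.sqrt(2.0)
detail = (f[0::2] - f[1::2]) / np.sqrt(2.0)

# Detail coefficients vanish on the constant regions and are nonzero
# only at the pair straddling the jump.
print(np.count_nonzero(detail))  # 1
```

The single nonzero detail coefficient pinpoints the discontinuity, which is the behavior a global polynomial or Fourier basis cannot reproduce with so few coefficients.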
These properties make wavelet frames a natural and powerful choice for time-frequency analysis in a wide range of practical applications.

![](images/16d7e38b6091cf3b9aaadd4fc8de52bf7269bb4cb212d7134f9167bd3c01c154.jpg)
Figure 2: A diagram of the relationships between HiPPO, SaFARi, WaLRUS (this work), and SSM-based models such as S4 and Mamba. The focus of this work is on the development of a wavelet-based SSM in a function approximation task, which could later be used as a drop-in replacement for the SSM layer in a learned model.

![](images/c105889c3730afe65076618496a369536119b37dc10ad9fb9b110f5a32be02c2.jpg)
Figure 3: Left: Elements of a Daubechies-22 wavelet frame, with father wavelet $\phi$, mother wavelet $\psi$, and two scales. Right: The scaled and translated $A$ matrices for WaLRUS with $N = 21$.

![](images/8942678cf0ff0e7ab2b70bf9931446c5c1ce35ba3dd338191f98d73a0f8f0f1e.jpg)

![](images/6d2b8624bfeb8af27c63b060c29bb457a628114901659ac8bbc325b67655d929.jpg)

# 3 WaLRUS: Wavelet-based SSMs

Daubechies wavelets [18, 28] provide a particularly useful implementation of a SaFARi SSM. While there are different types of commonly used wavelets, Daubechies wavelets are of particular interest in signal representation due to their maximal vanishing moments over compact support.

To construct the frame, we use the usual dyadic scaling for multiresolution analysis; that is, scaling the mother wavelet by a factor of two at each level. For each scale, different shifts along the x-axis are introduced. Compressive wavelet frames are truncated versions of wavelet frames that contain only a few of the coarser scales, and introduce overlapping shifts to keep the expressivity and satisfy the frame condition (see Mallat [29]). The interplay between the retained scales and the minimum overlap required to maintain expressivity is extensively studied in the wavelet literature [18, 28, 29].
If there is excess overlap in shifts, the wavelet frame becomes redundant, and redundancy has advantages in expressivity and robustness to noise.

Figure 3, left, gives a visual representation of how we construct such a frame. The frame consists of shifted copies of the father wavelet $\phi$ at one scale, and shifted copies of a mother wavelet $\psi$ at different scales, with overlaps that introduce redundancy. Figure 3, right, shows the resulting $A$ matrices for the scaled and translated WaLRUS.$^{1}$

Some recent works [31, 32] have conceptually connected the use of wavelets and SSM-based models (namely Mamba). These efforts are fundamentally distinct from ours in that they perform a multiresolution analysis on the input to the model only. No change is made to the standard Mamba SSM layer.

This work, on the other hand, is the first to challenge the ubiquity of the Legendre-based SSM and present alternative wavelet-based machinery for the core of powerful models like Mamba. WaLRUS could be used as a drop-in replacement for any existing SSM-based framework. However, before simply substituting a part in a larger system, we must first justify how and why a different SSM can improve performance. This paper presents a tool that stands alone as an online function approximator, and also provides a foundational building block for future integration in SSM-based models.

# 3.1 Redundancy of the wavelet frame and size of the SSM

In contrast to orthonormal bases, redundant frames allow more than one way to represent the same signal. This redundancy arises from the non-trivial null space of the associated frame operator, meaning that multiple coefficient vectors can yield the same reconstructed function. Although the representation is not unique, it is still perfectly valid, and this flexibility offers several key advantages in signal processing.
In particular, redundancy can improve robustness to noise, enable better sparsity for certain signal classes, and enhance numerical stability in inverse problems [33-35].

We distinguish between the total number of frame elements $N_{\mathrm{full}}$ and the effective dimensionality $N_{\mathrm{eff}}$ of the subspace where the meaningful representations reside. In other words, while the frame may consist of $N_{\mathrm{full}}$ vectors, the actual information content lies in a lower-dimensional subspace of size $N_{\mathrm{eff}}$. This effective dimensionality can be quantified by analyzing the singular-value spectrum of the frame operator [29, 33].

For the WaLRUS SSMs described in this work, we first derive $A_{N_{\mathrm{full}}}$ using all elements of the redundant frame. We then diagonalize $A$ and reduce it to a size of $N_{\mathrm{eff}}$. This ensures that different frame choices, whether orthonormal or redundant, can be fairly and meaningfully compared in terms of computational cost, memory usage, and approximation accuracy. The exact relationship between the wavelet frame and the resulting $N_{\mathrm{eff}}$ of the $A$ matrix depends not only on the overlap of the shifts in the frame, but also on the type (and order) of the chosen wavelet, and the number of scales. Determining the "optimal" overlap or $N_{\mathrm{eff}}$ is application-specific and an area for future research.

# 3.2 Computational complexity of WaLRUS

For a sequence of length $L$, scaled-SaFARi has $O(N^3 L)$ complexity due to solving an $N$-dimensional linear system at each step, while translated-SaFARi can reuse matrix inverses and thus has $O(N^2 L)$ complexity, assuming no diagonalization [17]. When the state matrix $A$ is diagonalizable, the complexity reduces to $O(NL)$ and can further accelerate to $O(L)$ with parallel processing on independent scalar SSMs.
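The computational payoff of diagonalization can be made concrete with a minimal numpy sketch. The sizes and the randomly generated $A$, $B$ below are toy values, not the WaLRUS matrices: one GBT step (Eq. 5, time-independent $A$, $\alpha = 1/2$) computed with a dense linear solve agrees with the decoupled, elementwise update of Eq. 6.

```python
import numpy as np

# Toy diagonalizable system (illustrative; not the paper's A and B).
rng = np.random.default_rng(0)
N, dt, alpha = 8, 0.01, 0.5
lam = rng.uniform(0.5, 2.0, N)               # eigenvalues of A
V = rng.standard_normal((N, N))              # eigenvector matrix
A = V @ np.diag(lam) @ np.linalg.inv(V)
B = rng.standard_normal(N)
c, u = rng.standard_normal(N), 1.3           # current state and current input

# Dense GBT step for dc/dt = -A c + B u: an O(N^3) linear solve per step.
lhs = np.eye(N) + dt * alpha * A
rhs = (np.eye(N) - dt * (1 - alpha) * A) @ c + dt * B * u
c_dense = np.linalg.solve(lhs, rhs)

# Diagonalized step: transform once (c~ = V^{-1} c, B~ = V^{-1} B), then
# each coordinate updates independently with elementwise operations.
ct, Bt = np.linalg.solve(V, c), np.linalg.solve(V, B)
ct_next = ((1 - dt * (1 - alpha) * lam) * ct + dt * Bt * u) / (1 + dt * alpha * lam)
c_diag = V @ ct_next

print(np.allclose(c_dense, c_diag))  # True
```

The elementwise step is also what permits running the $N$ scalar SSMs in parallel.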
We observe that all of the scaled and translated WaLRUS SSMs we implemented, regardless of dimension, were stably diagonalizable. Further research is required to determine whether Daubechies wavelets will always yield diagonalizable SSMs. Legendre-based SSMs, on the other hand, are not stably diagonalizable [9]. Although [9] proposed a fast sequential HiPPO-LegS update to achieve $O(NL)$ complexity, [17] showed that it cannot be parallelized to $O(L)$. Moreover, no efficient sequential update exists for HiPPO-LegT, leaving Legendre-based SSMs at a disadvantage during inference when sequential updates are needed.

As sequence length increases, step-wise updates become a bottleneck, especially during training, when the entire sequence is available upfront. This can be mitigated by using convolution kernels instead of sequential updates. Precomputing the convolution kernel and applying it via convolution accelerates computation, leveraging GPU-based parallelism to achieve $O(\log L)$ run-time complexity for diagonalizable SSMs. This optimization is feasible for both WaLRUS and Fourier-based SSMs. Although Legendre-based SSMs can attain similar asymptotic complexity through structured algorithms [10, 12], their nondiagonal nature prevents decoupling into $N$ independent SSMs.

# 3.3 Representation errors in the translated WaLRUS

Truncated representations in SSMs inevitably introduce errors, as discarding higher-order components limits reconstruction fidelity [17]. SaFARi investigated these errors only for scaled SSMs, leaving the approximation accuracy of translated SSMs unquantified. Visualizing the convolution kernels generated by different SSMs offers some insight into their varying performance on the function approximation task. An "ideal" kernel would include a faithful representation of each element of the basis or frame from $T = 0$ to $T = W$, where $W$ is the window width, and it would contain no non-zero elements between $W$ and $L$.
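The kernel view of a diagonalized SSM can be sketched for a single decoupled scalar state. The recurrence coefficients and sequence length below are illustrative toy values: the sequential scan and the precomputed kernel give the same final state, and the kernel form is what an FFT-based convolution parallelizes.

```python
import numpy as np

# After diagonalization, each scalar SSM obeys c[k+1] = a*c[k] + b*u[k].
# Toy coefficients and length (illustrative, not from the paper).
rng = np.random.default_rng(1)
L = 256
a, b = 0.97, 0.5
u = rng.standard_normal(L)

# Sequential (recurrent) evaluation of the final state.
c = 0.0
for uk in u:
    c = a * c + b * uk

# Convolutional evaluation: precompute the kernel K = [b, a*b, a^2*b, ...]
# once; the final state is a dot product of K with the time-reversed input
# (an FFT-based convolution yields all intermediate states at once).
K = b * a ** np.arange(L)      # K[j] weights the input j steps in the past
c_conv = float(np.dot(K, u[::-1]))

print(np.isclose(c, c_conv))  # True
```

Visualizing such kernels, one per frame element, is exactly the diagnostic used in this section.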
However, certain bases generate kernels with warping issues, as illustrated in Fig. 4.

The HiPPO-LegT kernel loses coefficients due to warping within the desired translating window (see areas B and C of Fig. 4). For higher degrees of Legendre polynomials, the kernel exhibits an all-zero region at the beginning and end of the sliding window. This implies that high-frequency information in the input is not captured at the start or end of the sliding window, and the extent of this dead zone

![](images/15272d301672268502b658ed63242cfa562c7e622af2b61457726a75843549b9.jpg)
Figure 4: The kernel generated by HiPPO-LegT with window size $W = 2000$ and representation size $N = 500$. Three key non-ideal aspects of the kernel are noticeable: A) poor localization due to substantial non-zero values outside $W$, B) coefficient loss at the bottom left of the kernel, and C) coefficient loss at the bottom right of the kernel for $t \in (1500, 2000)$.

![](images/aaa9f5bc60d1d6df57822df5bf1f5c2e707bc68d569863e3168dd8edbee9e08c.jpg)
Figure 5: Left: The ideal kernels, which yield zero representation error, are shown for Translated-WaLRUS (using the D22 wavelet), HiPPO-LegT, and HiPPO-FouT. Right: The corresponding kernels generated by the translated models are presented for comparison. WaveT has superior localization within the window of interest compared to HiPPO-LegT and HiPPO-FouT.

increases with higher frequencies. The translated Fourier kernel primarily suffers from the opposite problem: substantial nonzero elements outside the kernel window indicate that FouT struggles to effectively "forget" historical input values. Thus contributions from input signals outside the sliding window appear as representation errors. LegT also has this problem, to a lesser extent (see area A of Fig. 4 for a closer view of the kernel).

A visual inspection of Fig.
5 reveals that the translated-WaLRUS kernel closely matches the idealized version, whereas both FouT and LegT exhibit significant errors in their computed kernels. We emphasize that the issues observed with LegT and FouT arise from inherent limitations of the underlying SSMs themselves and are not due to the choice of input signal classes.

# 4 Experiments

This section deploys the WaLRUS SSM on synthetic and real signals for the task of function approximation, comparing its performance with existing models in the literature. We evaluate performance in terms of MSE as well as the models' ability to track important signal features such as singularities, and show that WaLRUS has an edge over state-of-the-art polynomial-based SSMs.

To benchmark WaLRUS against state-of-the-art SSMs, we implement two variants, Scaled-WaLRUS and Translated-WaLRUS, which we call WaveS and WaveT respectively, following HiPPO's convention. These models are compared against the top-performing HiPPO-based SSMs. Further details on the wavelet frames used in each experiment are provided in Appendix A.2.4, and code can be found at https://github.com/echbaba/walrus.

We conduct experiments on the following datasets:

![](images/cdb33d9e37e40eb951242d2c7bbb7272f08384e4df85f757f22becd7b910334f.jpg)
Figure 6: Comparing reconstruction MSE between WaveS, LegS, and FouS. Error bars represent the first and third quartiles of MSE. WaveS produces the lowest MSE on each dataset.
| Dataset | LegS | FouS | WaveS |
|---|---|---|---|
| M4 | 0% | 0.47% | 99.53% |
| Speech | 4.25% | 0% | 95.75% |
| Blocks | 0% | 0% | 100% |
| Spikes | 0% | 0% | 100% |
| Bumps | 0% | 0% | 100% |
| Piecepoly | 1.00% | 0% | 99.00% |
Table 1: Percent of tests where each basis had the lowest overall MSE.

M4 Forecasting Competition [36]: A diverse collection of univariate time series with varying sampling frequencies, drawn from domains such as demographics, finance, industry, macro, and micro.

Speech Commands [37]: A dataset of one-second audio clips featuring spoken English words from a small vocabulary, designed for benchmarking lightweight audio recognition models.

Wavelet Benchmark Collection [38]: A synthetic benchmark featuring signals with distinct singularity structures, such as Bumps, Blocks, Spikes, and Piecewise Polynomials. We generate randomized examples from each class, with further details and visualizations provided in Appendix A.2.2.

# 4.1 Comparisons among frames

We note that no frame is universally optimal for all input classes, as different classes of input signals exhibit varying decay rates in representation error. However, due to the superior localization and near-optimal error decay rate of wavelet frames, wavelet-based SSMs consistently show an advantage over Legendre- and Fourier-based SSMs across a range of real-world and synthetic signals. These experiments position WaLRUS as a powerful and adaptable approach for scalable, high-fidelity signal representation.

# 4.1.1 Experimental setup

The performance of SSMs in online function approximation can be evaluated in several ways. One metric is the mean squared error (MSE) of the reconstructed signal compared to the original. In the following sections, we compare the overall MSE for SSMs with a scaled measure, and the running MSE for SSMs with a translated measure.

Additionally, in some applications, the ability to capture specific features of a signal may be of greater interest than the overall MSE. As an extreme case, consider a signal that is nearly always zero, but contains a few isolated spikes.
If our estimated signal is identically zero, then the MSE will be small, but all of the information of interest has been lost.

In all the experiments, we use equal SSM sizes $N_{\mathrm{eff}}$, as described in Sec. 3.1.

# 4.1.2 Function approximation with the scaled measure

In this experiment, we construct Scaled-WaLRUS, HiPPO-LegS, and HiPPO-FouS with equal effective sizes (see Appendix A.2.4). Frame sizes are empirically selected to balance computational cost and approximation error across datasets.

Fig. 6 shows the average MSE across random instances of multiple datasets. Not only is the average MSE lowest for WaLRUS on all datasets, but even where there is high variance in the MSE, all methods tend to keep the same relative performance. That is, the overlap in the error bars in Fig. 6 does not imply that the methods are indistinguishable; rather, for a given instance of a dataset, the MSE across all three SSM types tends to shift together, maintaining the MSE ordering WaveS <
| Measure | Metric | Spikes: Legendre | Spikes: Fourier | Spikes: Wavelets | Bumps: Legendre | Bumps: Fourier | Bumps: Wavelets |
|---|---|---|---|---|---|---|---|
| Scaled | Peaks missed | 2.5% | 0.62% | 0% | 0.29% | 0.30% | 0% |
| Scaled | False peaks | 1.6% | 1.6% | 0.01% | 0.3% | 1.9% | 0% |
| Scaled | Instance-wise wins | 76% | 92.9% | 100% | 97.1% | 96.9% | 100% |
| Scaled | Relative amplitude error | 16.2% | 11.8% | 5.5% | 12.4% | 16.2% | 6.5% |
| Scaled | Average displacement | 18.8 | 32.0 | 10.0 | 12.7 | 33.7 | 7.1 |
| Translated | Peaks missed | 6.4% | 13.0% | 0.27% | 1.12% | 29.76% | 0.08% |
| Translated | False peaks | 1.1% | 0.05% | 0.22% | 0.43% | 0.28% | 0.20% |
| Translated | Instance-wise wins | 36.9% | 13.65% | 99.95% | 85.1% | 0.2% | 100% |
| Translated | Relative amplitude error | 19.6% | 28.4% | 3.5% | 6.9% | 28.4% | 2.5% |
| Translated | Average displacement | 6.0 | 5.4 | 4.3 | 5.5 | 5.8 | 4.8 |
Table 2: Performance comparison of WaLRUS-Wavelets, HiPPO-Legendre, and HiPPO-Fourier for peak detection with the scaled and translated measures. WaLRUS shows a significant advantage in successfully remembering singularities over HiPPO SSMs.

$\mathrm{LegS} < \mathrm{FouS}$. To highlight this result, the percentage of instances where each SSM had the best performance is also provided in Table 1.

The representative power of WaLRUS is attributed to its ability to minimize truncation and mixing errors by selecting frames that capture signal characteristics with higher fidelity. See [17] for further details.

# 4.1.3 Peak detection with the scaled measure

In this experiment, we aim to detect the locations of random spikes in input sequences using Scaled-WaLRUS, FouS, and LegS, all constructed with equal sizes. We generate random spike sequences, add Gaussian noise $(\mathrm{SNR} = 0.001)$, and compute their representations with Daubechies wavelets, Legendre polynomials, and Fourier series. The reconstructed signals are transformed into wavelet coefficients, and spike locations are identified following the method in [30].

To evaluate performance, we compare the relative amplitude and displacement of detected spikes with their ground truth (see Fig. 7). This process is repeated for 1000 random sequences, each containing 10 spikes. Table 2 summarizes the average number of undetected spikes for each SSM and the instance-wise win percentage, representing the number of instances where each SSM had fewer or the same number of missed peaks compared to the other SSMs. Note that these percentages do not sum to 100, as some instances result in identical spike detection across all models.

As shown in Table 2, WaveS misses significantly fewer spikes than FouS and LegS, with lower displacement errors and reduced amplitude loss. Figure 1 illustrates an example where WaLRUS successfully captures closely spaced spikes that are missed by LegS and FouS, demonstrating its superior time resolution.
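The evaluation protocol described above (embed spikes, add noise, detect, score by misses and displacement) can be sketched end to end. The simple threshold detector below is a stand-in for the wavelet modulus-maxima method of [30], and all sizes, amplitudes, and noise levels are illustrative, not the paper's settings.

```python
import numpy as np

# Toy ground truth: random spike locations and amplitudes (illustrative).
rng = np.random.default_rng(2)
L, n_spikes = 1000, 10
true_loc = rng.choice(L, n_spikes, replace=False)
true_amp = rng.uniform(1.0, 2.0, n_spikes)

x = np.zeros(L)
x[true_loc] = true_amp
y = x + 0.01 * rng.standard_normal(L)   # noisy observation (toy noise level)

# Stand-in detector: threshold crossing instead of wavelet modulus maxima.
det_loc = np.flatnonzero(y > 0.5)

# Score: displacement of each true spike to its nearest detection;
# a spike is "missed" if no detection lands within a small tolerance.
displacement = [int(np.min(np.abs(det_loc - t))) for t in true_loc]
missed = sum(d > 5 for d in displacement)
print(missed)  # 0 at this (easy) noise level
```

The paper's metrics (missed peaks, false peaks, relative amplitude error, average displacement) are all computed from exactly this kind of matching between detected and true spike locations.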
![](images/4f1def6e8966af4f3fcc96319bbdfa41dd3c068cfe652f9a0e90af14e012f14a.jpg)
Figure 7: Illustration of the metrics used to evaluate the performance of SSMs on different datasets in Table 2.

![](images/4e775dc52e2e4e464c9e6d1fc244de04bfc9169608bf3123ae892d14e49dcf0a.jpg)

![](images/a1adf01ad3f3db2275d4937a9d6b9cab502dc0c2375a7d15f299957a084e448c.jpg)

![](images/a4cea581f419a380000ec595c0cd7a8e5a8fed996f9913d86ccb05fb4b775e18.jpg)

Figure 8: For each dataset, the median and (0.4, 0.6) quantiles of the running reconstruction MSE across different instances are shown in different colors for WaveT, LegT, and FouT. WaveT captures information in the input signals with higher fidelity than LegT and FouT.
![](images/42f6e23f21fee0d3b7448f23fc51fa17c128fbaadc3cb84d509d829f9a3b0a67.jpg)

![](images/95b138a41c7d66f37138beb9d794e2932dd6eda78d0e9921703170b93d0ca596.jpg)

![](images/995f1fc66e18e87dcc649613b14f3c845eff4a23adc1324bb130956a5a38512e.jpg)

# 4.1.4 Function approximation with the translated measure

In this experiment, we construct WaveT, LegT, and FouT SSMs, all with equal effective sizes (see Appendix A.2.4). The chosen effective sizes are smaller than those used for the scaled measure, since each translated window contains lower-frequency content, making it possible to reconstruct the signal with smaller frames. For each input signal instance, we then compute the running reconstruction MSE at each time step, as shown in Fig. 8. This plot represents how the MSE evolves over time across multiple instances, providing a comparison of running MSEs for each SSM. The results demonstrate that Translated-WaLRUS consistently achieves slightly better fidelity than LegT and significantly outperforms FouT across all datasets.
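The running-MSE evaluation can be sketched as follows. The sliding-window projection below uses a cosine (DCT-like) basis as an illustrative stand-in for an actual translated SSM state, and all sizes are toy values, not the paper's.

```python
import numpy as np

# Toy signal and window/representation sizes (illustrative only).
rng = np.random.default_rng(3)
L, W, N = 512, 64, 16
u = np.cumsum(rng.standard_normal(L))    # a random-walk test signal

# Orthonormal cosine basis on the window (stand-in for an SSM's frame).
k = np.arange(N)[:, None]
t = (np.arange(W) + 0.5) / W
Phi = np.cos(np.pi * k * t) * np.sqrt(2.0 / W)
Phi[0] /= np.sqrt(2.0)                   # DC row normalization

# Running MSE: at each step, project the current window onto N coefficients,
# reconstruct, and record the windowed reconstruction error.
running_mse = []
for T in range(W, L):
    window = u[T - W:T]
    coeffs = Phi @ window                # N-dimensional representation
    recon = Phi.T @ coeffs               # reconstruction from N coefficients
    running_mse.append(float(np.mean((window - recon) ** 2)))

print(len(running_mse), min(running_mse) >= 0.0)
```

Plotting `running_mse` across many instances, as in Fig. 8, shows how each representation's error evolves as the window slides over the signal.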
As discussed in Section 3.3, the reconstruction error stems from two main factors: (1) non-idealities in the translated SSM kernel, which affect its ability to retain relevant information within the window while effectively forgetting data outside it (see Fig. 4), and (2) the extent to which these fundamental non-idealities are activated by the input signal. For example, signals with large regions of zero values are less impacted by kernel inaccuracies, as the weights outside the kernel contribute minimally to reconstruction.

WaveT achieves a modest, and in some cases negligible, MSE improvement over LegT (e.g., M4 and Blocks). However, the kernel-based limitations highlighted in Section 3.3 may have a more pronounced effect on longer sequences or different datasets.

# 4.1.5 Peak detection with the translated measure

In this experiment, we evaluate the ability of WaveT, FouT, and LegT to retain information about singularities in signals, following the setup in Section 4.1.3, but with a translated SSM. We generate 2,000 random sequences, each containing 20 spikes. The average number of undetected spikes for each SSM, along with instance-wise win percentages, is reported in Table 2. As in the scaled-measure experiment, the percentages do not sum to 100 due to ties across SSMs. Table 2 shows that WaveT consistently outperforms FouT and LegT, with fewer missed peaks, reduced displacement, and less amplitude loss.

# 5 Limitations

In this work we have implemented only one type of wavelet (Daubechies-22), as our purpose is to introduce practical and theoretical reasons to replace polynomial SSMs with wavelet SSMs. Other wavelets (biorthogonal, coiflets, Morlets, etc.) could also be used, with some caveats. First, we require a differentiable frame [17], so nondifferentiable wavelets such as Haar wavelets or other low-order Daubechies and coiflet wavelets cannot be used with this method.
Second, the redundancy of the frame (and the resulting $N_{\mathrm{eff}}$ of the $A$ matrix) depends on the shape of the wavelet function and the chosen shifts and scales of this function. Other wavelet types, and other choices of shift and scale, may exhibit better or worse performance and dimensionality reduction; this is an important question for future work.

Additionally, we emphasize that the choice of frame is application-dependent. If the signal is known to be smooth and periodic, for example, a wavelet-based SSM is not likely to outperform a Fourier-based SSM. The introduction of WaLRUS is not intended to be a one-size-fits-all model, but rather a broadly applicable tool that combines compressive online function-approximation SSMs with the expressive power of wavelets.

# 6 Conclusions

We have demonstrated in this paper how function approximation with SSMs, initially proposed by [9] and subsequently extended to general frames, can be improved using wavelet-based SSMs. SSMs constructed with wavelet frames can provide higher fidelity in signal reconstruction than the state-of-the-art Legendre- and Fourier-based SSMs over both scaled and translated measures. Future work will explore alternative wavelet families, and the trade-offs in effective size, frequency-space coverage, and representation capabilities of different frames.

Moreover, since the Legendre-based HiPPO SSM forms the core of S4 and Mamba, and WaLRUS provides a drop-in replacement for HiPPO, WaLRUS could be used to initialize SSM-based machine learning models, potentially providing more efficient training. As AI becomes ubiquitous and the demand for computation explodes, smarter and more task-tailored ML architectures can help mitigate the strain on energy and environmental resources.

# Acknowledgments

Special thanks to T. Mitchell Roddenberry for fruitful conversations and insights.
This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571, N00014-20-1-2534, N00014-18-1-2047, and MURI N00014-20-1-2787; AFOSR grant FA9550-22-1-0060; and DOE grant DE-SC0020345. Additional support was provided by a Vannevar Bush Faculty Fellowship, the Rice Academy of Fellows, and the Rice University and Houston Methodist 2024 Seed Grant Program.

# References

[1] Nikola Zubić, Mathias Gehrig, and Davide Scaramuzza. State space models for event cameras. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.
[2] Sina Alemohammad, Hossein Babaei, Randall Balestriero, Matt Y. Cheung, Ahmed Imtiaz Humayun, Daniel LeJeune, Naiming Liu, Lorenzo Luzi, Jasper Tan, Zichao Wang, and Richard G. Baraniuk. Wearing a mask: Compressed representations of variable-length sequences using recurrent neural tangent kernels. In 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2950-2954, 2021.
[3] Eric Nguyen, Karan Goel, Albert Gu, Gordon Downs, Preey Shah, Tri Dao, Stephen Baccus, and Christopher Ré. S4ND: Modeling images and videos as multidimensional signals with state spaces. In Advances in Neural Information Processing Systems, 2022.
[4] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[5] Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990.
[6] Mike Schuster and Kuldip K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681, 1997.
[7] Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602-610, 2005.
[8] Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio.
Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[9] Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, and Christopher Ré. HiPPO: Recurrent memory with optimal polynomial projections. In Advances in Neural Information Processing Systems, 2020.
[10] Albert Gu, Karan Goel, and Christopher Ré. Efficiently modeling long sequences with structured state spaces. In International Conference on Learning Representations, 2022.
[11] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.
[12] Albert Gu, Isys Johnson, Aman Timalsina, Atri Rudra, and Christopher Ré. How to train your HiPPO: State space models with generalized orthogonal basis projections. In International Conference on Learning Representations, 2023.
[13] Ankit Gupta, Albert Gu, and Jonathan Berant. Diagonal state spaces are as effective as structured state spaces. In Advances in Neural Information Processing Systems, 2022.
[14] Albert Gu, Karan Goel, Ankit Gupta, and Christopher Ré. On the parameterization and initialization of diagonal state space models. In Advances in Neural Information Processing Systems, 2022.
[15] Jimmy T. H. Smith, Andrew Warrington, and Scott Linderman. Simplified state space layers for sequence modeling. In International Conference on Learning Representations, 2023.
[16] Ramin Hasani, Mathias Lechner, Tsun-Hsuan Wang, Makram Chahine, Alexander Amini, and Daniela Rus. Liquid structural state-space models. In International Conference on Learning Representations, 2023.
[17] Hossein Babaei, Mel White, Sina Alemohammad, and Richard G. Baraniuk. SaFARi: State-space models for frame-agnostic representation. arXiv preprint arXiv:2505.08977, 2025.
[18] Ingrid Daubechies. Ten Lectures on Wavelets. SIAM Press, 1992.
[19] Alan V. Oppenheim. Discrete-Time Signal Processing. Pearson, 1999.
[20] Agostino Abbate, Casimer DeCusatis, and Pankaj K. Das. Wavelets and Subbands: Fundamentals and Applications. Springer, 2012.
[21] George E.P. Box, Gwilym M. Jenkins, Gregory C. Reinsel, and Greta M. Ljung. Time Series Analysis: Forecasting and Control. John Wiley & Sons, 2015.
[22] John G. Proakis. Digital Signal Processing: Principles, Algorithms, and Applications. Pearson, 2001.
[23] Paolo Prandoni and Martin Vetterli. Signal Processing for Communications. EPFL Press, 2008.
[24] Guofeng Zhang, Tongwen Chen, and Xiang Chen. Performance recovery in digital implementation of analogue systems. SIAM Journal on Control and Optimization, 45(6):2207-2223, 2007.
[25] Patrice Abry, Patrick Flandrin, and Murad S. Taqqu. Self-similarity and long-range dependence through the wavelet lens. In Paul Doukhan, George Oppenheim, and Murad S. Taqqu, editors, Theory and Applications of Long-Range Dependence, pages 527-556. Birkhäuser, 2003.
[26] Alfred Haar. Zur Theorie der orthogonalen Funktionensysteme. PhD thesis, University of Göttingen, 1909.
[27] A. Grossmann and J. Morlet. Decomposition of Hardy functions into square integrable wavelets of constant shape. SIAM Journal on Mathematical Analysis, 15(4):723-736, 1984.
[28] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41(7):909-996, 1988.
[29] Stéphane Mallat. A Wavelet Tour of Signal Processing: The Sparse Way. Academic Press, 3rd edition, 2008.
[30] Stéphane Mallat and Wen Liang Hwang. Singularity detection and processing with wavelets. IEEE Transactions on Information Theory, 38(2):617-643, 1992.
[31] Tianpei Zhang, Yiming Zhu, Jufeng Zhao, Guangmang Cui, and Yuchen Zheng. Exploring state space model in wavelet domain: An infrared and visible image fusion network via wavelet transform and state space model, 2025.
[32] Wenbin Zou, Hongxia Gao, Weipeng Yang, and Tongtong Liu.
Wave-Mamba: Wavelet state space model for ultra-high-definition low-light image enhancement. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 1534-1543. Association for Computing Machinery, 2024.
[33] Ole Christensen. An Introduction to Frames and Riesz Bases. Birkhäuser, 2003.
[34] Karlheinz Gröchenig. Foundations of Time-Frequency Analysis. Springer, 2001.
[35] Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 15(12):3736-3745, 2006.
[36] Spyros Makridakis, Evangelos Spiliotis, and Vassilios Assimakopoulos. The M4 competition: 100,000 time series and 61 forecasting methods. International Journal of Forecasting, 36(1):54-74, 2020.
[37] Pete Warden. Speech commands: A dataset for limited-vocabulary speech recognition. arXiv preprint arXiv:1804.03209, 2018.
[38] David L. Donoho and Iain M. Johnstone. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3):425-455, 1994.

# A Appendix

# A.1 SaFARi derivation for arbitrary frame

Where HiPPO [9] provided closed-form solutions to construct $A$ and $B$ for a few polynomial bases, SaFARi [17] introduced a method to build $A$ and $B$ from any arbitrary frame. The derivations below follow [17] and are given here as a convenient reference for the reader.

Take a signal $f$ and a frame with elements $\phi$ . To get a vector of weights representing the signal on this frame, we use the inner product:

$$
c _ {n} = \int f (t) \overline {{\phi (t)}} d t \tag {A.1}
$$

At some time $T$ , we scale the magnitude of $f(t)$ and stretch the frame elements to match the length of $f$ observed so far:

$$
c _ {n} (T) = \int_ {t _ {0}} ^ {T} f (t) \left(\frac {1}{T - t _ {0}}\right) \overline {{\phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right)}} d t \tag {A.2}
$$

We are actually interested in the change in $c$ .
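As a quick numerical sanity check of Eq. A.2 (a hypothetical sketch with a simple discretization, not part of the SaFARi codebase), the scaled coefficient can be approximated by a trapezoidal sum. Here $\phi$ is taken to be the Haar scaling function, which is constant on $[0, 1]$:

```python
import numpy as np

def scaled_coefficient(f, phi, T, t0=0.0, num=4096):
    """Approximate Eq. (A.2): project f onto a copy of phi
    stretched to cover [t0, T] and scaled by 1/(T - t0)."""
    t = np.linspace(t0, T, num)
    integrand = f(t) * phi((t - t0) / (T - t0)) / (T - t0)
    # trapezoidal rule for the integral over [t0, T]
    return float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2)

# Toy check: f(t) = 1 projected on the Haar scaling function (phi = 1 on [0, 1]).
c = scaled_coefficient(lambda t: np.ones_like(t), lambda u: np.ones_like(u), T=2.0)
```

Because the $\frac{1}{T - t_0}$ factor normalizes the window, projecting $f(t) = 1$ onto the constant scaling function returns $c(T) = 1$ for any $T$.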
We will take the partial derivative with respect to $T$ , since the coefficients update at each new time $T$ . Call the start time $t_0$ : this is 0 in the scaling case, while in the windowed case $t_0$ varies with $T$ . If we call the size of the window $\theta$ , then $t_0 = T - \theta$ . The derivation below is generic; afterwards we separate the two cases.

$$
\frac {d}{d T} c _ {n} (T) = \frac {d}{d T} \int_ {t _ {0}} ^ {T} f (t) \left(\frac {1}{T - t _ {0}}\right) \overline {{\phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right)}} d t \tag {A.3}
$$

This is the derivative of an integral whose bounds both depend on $T$ , so we call on the Leibniz integral rule and find:

$$
\begin{array}{l} \frac {d}{d T} c _ {n} (T) = f (T) \left(\frac {1}{T - t _ {0}}\right) \overline {{\phi (1)}} \frac {\delta}{\delta T} (T) - f (t _ {0}) \left(\frac {1}{T - t _ {0}}\right) \overline {{\phi (0)}} \frac {\delta}{\delta T} (t _ {0}) \\ + \int_ {t _ {0}} ^ {T} f (t) \underbrace {\frac {\delta}{\delta T} \left[ \left(\frac {1}{T - t _ {0}}\right) \overline {{\phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right)}} \right]} _ {\overline {{h (t)}}} d t \tag {A.4} \\ \end{array}
$$

Some manipulation of the $h(t)$ term yields:

$$
\begin{array}{l} h (t) = \left(\frac {1}{T - t _ {0}}\right) \left[ - \frac {\delta (t _ {0})}{\delta T} \left(\frac {1}{T - t _ {0}}\right) \phi^ {\prime} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) - \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \phi^ {\prime} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \right] \\ - \left(\frac {1}{T - t _ {0}}\right) \left[ \left(\frac {1 - \frac {\delta (t _ {0})}{\delta T}}{T - t _ {0}}\right) \phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \right] \tag {A.5} \\ \end{array}
$$

Our $h(t)$ term now contains the derivative of our frame element $(\phi')$ , but we would like to combine terms in $\phi$ .
Therefore we can make a mapping from $\phi' \rightarrow \phi$ using the dual frame, $\widetilde{\phi}$ :

$$
\phi^ {\prime} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) = \underbrace {\left\langle \phi^ {\prime} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) , \widetilde {\phi} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \right\rangle} _ {P} \phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \tag {A.6}
$$

Likewise:

$$
\left(t - t _ {0}\right) \phi^ {\prime} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) = \underbrace {\left\langle \left(t - t _ {0}\right) \phi^ {\prime} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) , \widetilde {\phi} \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \right\rangle} _ {P _ {t}} \phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \tag {A.7}
$$

This lets us simplify $h(t)$ further and group all the functions of $\phi$ . We also write $T - t_0 = \theta$ to save some space.

$$
h (t) = \frac {1}{\theta} \phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right) \left[ - \frac {\delta (t _ {0})}{\delta T} \frac {1}{\theta} P - \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \frac {1}{\theta} P _ {t} - \frac {1}{\theta} \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \right] \tag {A.8}
$$

Now we can return to Eq. A.4. $P$ is not a function of $t$ , so it can be moved outside the integral. For the measures we are looking at, $\frac{\delta (t_0)}{\delta T}$ is also constant with respect to $t$ ; it is either 0 or 1.
We can substitute and then group as follows:

$$
\begin{array}{l} \frac {d}{d T} c _ {n} (T) = \left(\frac {1}{T - t _ {0}}\right) \left[ f (T) \overline {{\phi (1)}} - f (t _ {0}) \overline {{\phi (0)}} \frac {\delta (t _ {0})}{\delta T} \right] \\ + \left(\frac {1}{T - t _ {0}}\right) \left[ - \frac {\delta (t _ {0})}{\delta T} \bar {P} - \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \bar {P} _ {t} - \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \right] \underbrace {\int_ {t _ {0}} ^ {T} f (t) \left(\frac {1}{T - t _ {0}}\right) \overline {{\phi \left(\frac {t - t _ {0}}{T - t _ {0}}\right)}} d t} _ {c (T)} \tag {A.9} \end{array}
$$

Noting that the final term in this equation is exactly Eq. A.2, we can simplify further:

$$
\begin{array}{l} \frac {d}{d T} c _ {n} (T) = \left(\frac {1}{T - t _ {0}}\right) \left[ f (T) \overline {{\phi (1)}} - f (t _ {0}) \overline {{\phi (0)}} \frac {\delta (t _ {0})}{\delta T} \right] \\ + \left(\frac {1}{T - t _ {0}}\right) c (T) \left[ - \frac {\delta (t _ {0})}{\delta T} \bar {P} - \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \bar {P} _ {t} - \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \right] \tag {A.10} \end{array}
$$

Unfortunately, we still have the term $f(t_0)$ , which we do not have access to; this is the value of the function at the start of our window. We have not stored this value; storing it would defeat the point of an online update in the first place. Instead, we approximate it from our current coefficient vector and our known frame.
$$
c = \langle \phi , f \rangle
$$

$$
f = \langle \widetilde {\phi}, c \rangle
$$

$$
f (t _ {0}) = \langle \widetilde {\phi} (0), c (T) \rangle
$$

We now have an update rule for $c$ that depends only on the frame $\phi$ , the current value of $c(T)$ , and the new information from the signal, $f(T)$ :

$$
\begin{array}{l} \frac {d}{d T} c (T) = \left(\frac {1}{T - t _ {0}}\right) \left[ f (T) \overline {{\phi (1)}} - \widetilde {\phi} (0) c (T) \overline {{\phi (0)}} \frac {\delta (t _ {0})}{\delta T} \right] \\ - \left(\frac {1}{T - t _ {0}}\right) c (T) \left[ \frac {\delta (t _ {0})}{\delta T} \bar {P} + \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \bar {P} _ {t} + \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \right] \tag {A.11} \\ \end{array}
$$

# A.1.1 The scaled case

In the case of scaling, $t_0 = 0$ and $\frac{\delta (t_0)}{\delta T} = 0$ , so every term carrying a factor of $\frac{\delta (t_0)}{\delta T}$ vanishes:

$$
\begin{array}{l} \frac {d}{d T} c _ {n} (T) = \left(\frac {1}{T}\right) \left[ f (T) \overline {{\phi (1)}} - \underbrace {\widetilde {\phi} (0) c (T) \overline {{\phi (0)}} \frac {\delta (t _ {0})}{\delta T}} _ {= 0} \right] \tag {A.12} \\ - \left(\frac {1}{T}\right) c (T) \left[ \underbrace {\frac {\delta (t _ {0})}{\delta T} \bar {P}} _ {= 0} + \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \bar {P} _ {t} + \left(1 - \frac {\delta (t _ {0})}{\delta T}\right) \right] \tag {A.13} \\ \end{array}
$$

$$
\frac {d}{d T} c _ {n} (T) = \left(\frac {1}{T}\right) f (T) \overline {{\phi (1)}} - \left(\frac {1}{T}\right) c (T) \left(\bar {P} _ {t} + 1\right) \tag {A.14}
$$

The $A$ matrix acts on the coefficient vector $c$ , and $B$ acts on the current input, $f(T)$ .
Expressed in matrix notation:

$$
\frac {d}{d T} c _ {n} (T) = - \frac {1}{T} \underbrace {\left(\bar {P} _ {t} + I\right)} _ {A} c (T) + \frac {1}{T} \underbrace {\overline {{\phi (1)}}} _ {B} f (T) \tag {A.15}
$$

Equivalently,

$$
\frac {d}{d T} c _ {n} (T) = - \frac {1}{T} \underbrace {\left(\left\langle \widetilde {\phi} \left(\frac {t}{T}\right) , t \phi^ {\prime} \left(\frac {t}{T}\right) \right\rangle + I\right)} _ {A} c (T) + \frac {1}{T} \underbrace {\overline {{\phi (1)}}} _ {B} f (T) \tag {A.16}
$$

# A.1.2 The translated case

Now $T - t_0 = \theta$ , where $\theta$ is the window size, and $\frac{\delta (t_0)}{\delta T} = 1$ . Following the same procedure as in the previous section:

$$
\frac {d}{d T} c _ {n} (T) = \left(\frac {1}{\theta}\right) f (T) \overline {{\phi (1)}} - \left(\frac {1}{\theta}\right) c (T) \left[ \widetilde {\phi} (0) \overline {{\phi (0)}} + \bar {P} \right] \tag {A.17}
$$

$$
\frac {d}{d T} c _ {n} (T) = - \frac {1}{\theta} \underbrace {\left(\bar {P} + \widetilde {\phi} (0) \overline {{\phi (0)}}\right)} _ {A} c (T) + \frac {1}{\theta} \underbrace {\overline {{\phi (1)}}} _ {B} f (T) \tag {A.18}
$$

$$
\frac {d}{d T} c _ {n} (T) = - \frac {1}{\theta} \underbrace {\left(\left\langle \widetilde {\phi} \left(\frac {t}{\theta}\right) , \phi^ {\prime} \left(\frac {t}{\theta}\right) \right\rangle + \widetilde {\phi} (0) \overline {{\phi (0)}}\right)} _ {A} c (T) + \frac {1}{\theta} \underbrace {\overline {{\phi (1)}}} _ {B} f (T) \tag {A.19}
$$

# A.2 Experiments

# A.2.1 Datasets

In this paper, we conducted our experiments on the following datasets:

M4 forecasting competition: The M4 forecasting competition dataset [36] consists of 100,000 univariate time series from six domains: demographic, finance, industry, macro, micro, and other. The data covers various frequencies (hourly, daily, weekly, monthly, quarterly, yearly) and originates from sources such as censuses, financial markets, industrial reports, and economic surveys.
It is designed to benchmark forecasting models across diverse real-world applications, accommodating different horizons and data lengths. We test on 3,000 random instances.

Speech commands: The speech commands dataset [37] is a set of 400 audio files, each about one second long and containing either a single spoken English word or background noise. The words are drawn from a small set of commands and are spoken by a variety of different speakers. This dataset is designed to help train simple machine learning models.

Wavelet benchmark collection: Donoho [38] introduced a collection of popular wavelet benchmark signals, each designed to capture different types of singularities. This benchmark includes well-known signals such as Bumps, Blocks, Spikes, and Piecewise Polynomial. Following this model, we synthesize random signals belonging to the classes of bumps, blocks, spikes, and piecewise polynomials. Details and examples of these signals can be found in Appendix A.2.2.

# A.2.2 Wavelet Benchmark Collection

Donoho [38] introduced a collection of popular wavelet benchmark signals, each designed to capture different types of singularities. This benchmark includes well-known signals such as Bumps, Blocks, Spikes, and Piecewise Polynomial.

Following this model, we synthesize random signals belonging to the classes of bumps, blocks, spikes, and piecewise polynomials in our experiments to compare the fidelity of DaubS to LegS and FouS, and likewise the fidelity of DaubT to LegT and FouT.

Figure 9 shows a random instance from each of the signal classes in our wavelet benchmark collection.

![](images/a9b4183ba0947f1aaef108fb3ee0c48c35508d4e33f185a58a814717652ff63d.jpg)

![](images/5a0b0f60fe9142f79f883207e6bb091a17720bb6abc9f484e6064f1c4af171c5.jpg)

![](images/1656b4f3e4984f7c5e66ef6f32117f27d79f51480d1510c8b51a8b3978b8793c.jpg)
Figure 9: Instances of different signal types in the wavelet benchmark collection.
Top Left: Blocks is a piecewise constant signal with random-height sharp jumps placed randomly. Top Right: Bumps is a collection of random pulses, where each pulse contains a cusp. Bottom Left: Piecepoly is a piecewise polynomial signal with discontinuities at the transitions between polynomial pieces. Bottom Right: Spikes is a collection of rectangular pulses placed randomly with random positive heights.

![](images/b22dba8efaecdcd3796145e4690b55870bc3418addf7c8c89c2514c35180dda5.jpg)

# A.2.3 Description of metrics for 'Spikes' and 'Bumps' experiments

- Peaks Missed: The number of true peaks in the signal is $N_{tp}$ , and the number of detected peaks (that is, where the estimated signal surpasses an amplitude threshold $Th_{amp}$ ) is $N_{dp}$ . $N_{dp|tp}$ is the number of detected peaks for which a true peak lies within a displacement threshold $(Th_{dis})$ of the detected peak.

$$
\text{Peaks Missed} = \left(1 - \frac {N _ {d p \mid t p}}{N _ {t p}}\right) \times 100 \%
$$

- False Peaks: The percentage of detected peaks that occurred without a true peak within the displacement threshold. The number of detected peaks with no matching true peak is denoted $N_{dp \mid \overline{tp}}$ .

$$
\text{False Peaks} = \frac {N _ {d p \mid \overline {{t p}}}}{N _ {d p}} \times 100 \%
$$

- Instance-wise Wins: Over $K$ time-series instances, SSM $m$ wins instance $k$ if it misses no more true peaks than any of the other SSM models.

$$
\text{Instance-wise Wins} = \frac {1}{K} \sum_ {k = 1} ^ {K} w _ {k} \times 100 \%
$$

$$
w _ {k} = \left\{ \begin{array}{l l} 1, & \text {if } \text{Peaks Missed} _ {m} \leq \text{Peaks Missed} _ {\text{others}}, \\ 0, & \text {otherwise}. \end{array} \right.
$$

In cases where multiple models achieve the same best score, each tied model receives credit for that time-series instance.
As a result, the sum of instance-wise wins over different SSMs may exceed $100\%$ .

- Relative Amplitude Error: The average percent error in the estimated amplitude of detected peaks, normalized over all detections (including false peaks).

$$
\text{Relative Amplitude Error} = \frac {1}{N _ {d p}} \left(\sum_ {n = 1} ^ {N _ {d p \mid t p}} \frac {\left| A _ {t p , n} - A _ {d p \mid t p , n} \right|}{A _ {t p , n}}\right) \times 100 \%
$$

- Average Displacement: The location of a detected peak for which a true peak lies within the displacement threshold is $X_{dp|tp}$ ; the location of the corresponding true peak is $X_{tp}$ .

$$
\text{Average Displacement} = \frac {1}{N _ {d p}} \sum_ {n = 1} ^ {N _ {d p}} \left| X _ {t p, n} - X _ {d p \mid t p, n} \right|
$$

# A.2.4 Wavelet frames used for each experiment

Unlike HiPPO-based SSMs, which are fully characterized by their state size $N$ , WaLRUS employs redundant wavelet frames that require additional parameters for identification. Once the wavelet frame is defined, the SaFARi framework constructs the unique $A$ , $B$ matrices corresponding to that frame. The key parameters for specifying a redundant wavelet frame in WaLRUS are as follows:

- Wavelet Function: Wavelet frames are built from a mother wavelet and a father wavelet, which capture high-frequency details and low-frequency approximations, respectively. Families such as Daubechies, Morlet, Symlet, and Coiflet provide varied wavelet functions. For this work, we use the D22 wavelet from the Daubechies family.
- L (Frame Length): The length of the wavelet frame. Increasing $L$ increases numerical accuracy in the calculation of the $A$ and $B$ matrices at the cost of additional computation time. However, this initial computation need only be performed once, so it is best to choose a large $L$ . For the experiments in this work, we set $L = 2^{19}$ .
+- Scale min and $N_{\mathrm{eff}}$ : The minimum scale sets the smallest feature of the signal that can be represented by the frame. This parameter should be chosen based on knowledge about the signal of interest and its component frequencies. Note that the size of the smallest feature is relative to the length of the signal under consideration, so this value may differ under scaling and translating measures. + +For wavelets, scale min also controls the effective rank, $N_{\mathrm{eff}}$ . Each new lower scale introduces a factor of two in the effective rank of the frame, owing to the additional shifted elements in each scale. Fig. 3 shows two scales, where there are 3 father wavelets $(\phi_0)$ and 3 coarse-scale mother wavelets $(\psi_1)$ . The next scale introduces 6 scaled and shifted mother wavelets $(\psi_2)$ , the next would include 12, and so on. Table 3 also illustrates this pattern, with scale min of 0 corresponding to $N_{\mathrm{eff}}$ of $2^{6}$ , scale min of -1 corresponding to $N_{\mathrm{eff}}$ of $2^{7}$ , and so on, with some margin of error for numerical accuracy and truncation. + +Our code includes another variable, scale max. Since smaller scales can also combine to represent larger scales, scale max in fact has no impact on $N_{\mathrm{eff}}$ (see [29] for further information). Fig. 10 demonstrates on an example implementation that varying scale max does not impact the size of $N_{\mathrm{eff}}$ . It is also easily shown that varying scale max results in the same diagonalized A; see our code supplement. Adding coarser scales can help improve numerical accuracy in the calculation of A, however. We do not include scale max in Table 3, but we do provide it in our code with each experiment for reproducibility. 
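To make the notion of effective rank concrete, $N_{\mathrm{eff}}$ can be measured as the number of singular values of the stacked frame vectors that exceed a relative threshold, analogous to how rcond is used in the pseudo-inverse. The sketch below uses a hypothetical toy frame (random rows, not the D22 construction from the paper):

```python
import numpy as np

def effective_rank(frame_matrix, rcond=0.01):
    """Count singular values above rcond * sigma_max.

    Mirrors a pseudo-inverse with the same rcond: smaller
    singular values are discarded as numerically negligible.
    """
    s = np.linalg.svd(frame_matrix, compute_uv=False)  # descending order
    return int(np.sum(s > rcond * s[0]))

# Toy redundant "frame": rescaled duplicate rows add redundancy but no new
# rank, analogous to how adding coarser scales leaves N_eff unchanged.
rng = np.random.default_rng(0)
base = rng.standard_normal((4, 64))      # 4 independent frame elements
frame = np.vstack([base, 2.0 * base])    # 8 rows, still effective rank 4
```

Duplicated (or rescaled) rows add redundancy without adding rank, which mirrors why varying scale max leaves $N_{\mathrm{eff}}$ unchanged.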
+ +![](images/8889b545668b793f9fb4f36491f8a9c702f8a81b4402f005bff09551866e1997.jpg) +Figure 10: Effective Rank of WaLRUS $A$ matrix with Scale Min=-3, shift=0.01 + +- Shift: At scale $i$ , $2^{-i}m$ overlapping shifts are applied to the wavelets, where $0 < m \leq 1$ is a shift constant. Setting $m = 1$ corresponds to dyadic shifts. As our wavelet frames typically + +
only contain a few dilation levels, using $m = 1$ can mean that the constructed set of vectors no longer satisfies the frame condition and is lossy. We choose a small value (0.01 for most experiments) and tune this as needed.

- rcond: This parameter controls the numerical stability of the pseudo-inverse calculation for the dual frame. Singular values smaller than $\operatorname{rcond} \times \sigma_{\max}$ are discarded during the inversion process to maintain numerical stability.

Note that all the above parameters serve solely to identify the redundant wavelet frame; WaLRUS does not introduce any new parameters. Table 3 summarizes the settings for all experiments, alongside the SSM sizes for HiPPO-Legendre and HiPPO-Fourier.

| Experiment | Basis/Measure | scale min | shift | $N_{\text{eff}}$ |
| --- | --- | --- | --- | --- |
| Scaled M4 | WaveS | -3 | 0.01 | 501 |
| | LegS | - | - | 500 |
| | FouS | - | - | 500 |
| Scaled Speech | WaveS | -5 | 0.01 | 1995 |
| | LegS | - | - | 1995 |
| | FouS | - | - | 1995 |
| Scaled synthetic | WaveS | -3 | 0.01 | 501 |
| | LegS | - | - | 500 |
| | FouS | - | - | 500 |
| Scaled peak detection | WaveS | 0 | 0.01 | 65 |
| | LegS | - | - | 65 |
| | FouS | - | - | 65 |
| Translated M4 | WaveT | -1 | 0.01 | 128 |
| | LegT | - | - | 128 |
| | FouT | - | - | 128 |
| Translated Speech | WaveT | -3 | 0.0025 | 500 |
| | LegT | - | - | 500 |
| | FouT | - | - | 500 |
| Translated synthetic | WaveT | -1 | 0.01 | 128 |
| | LegT | - | - | 128 |
| | FouT | - | - | 128 |
| Translated peak detection | WaveT | 0 | 0.01 | 65 |
| | LegT | - | - | 65 |
| | FouT | - | - | 65 |

Table 3: Parameters for the redundant wavelet frame used by WaLRUS in different experiments. All of the above experiments share the parameters $L = 2^{19}$ and rcond = 0.01.

# A.2.5 Computational resources

Within the scope of this paper, no networks were trained and no parameters were learned. Only CPU resources were utilized, but speed could be improved with parallel resources on a GPU. Using WaLRUS to find a representation has two stages:

- Pre-computing: Computing the SSM $A$ matrices and diagonalizing them. This step can be computationally intensive, but need only be performed once.
- Computation: Using the SSM $A$ matrices to find representations of signals.

For all our experiments except Scaled-Speech, the pre-computing stage takes less than 10 minutes. For Scaled-Speech, the pre-compute time is on the order of hours. Once the $A$ matrices are computed and stored, run time is the same for all experiments.

# NeurIPS Paper Checklist

# 1. Claims

Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
Answer: [Yes]

Justification: Our abstract and introduction state that we introduce the use of wavelets in state-space models for online function representation, and show how these can outperform state-of-the-art polynomial models for certain data types. Section 3 describes the construction of wavelet-based SSMs, and Section 4 experimentally supports our performance claims.

Guidelines:

- The answer NA means that the abstract and introduction do not include the claims made in the paper.
- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

# 2. Limitations

Question: Does the paper discuss the limitations of the work performed by the authors?

Answer: [Yes]

Justification: Section 5 describes limitations, both in terms of what we have implemented in this work and in terms of the use of our method.

Guidelines:

- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
- The authors are encouraged to create a separate "Limitations" section in their paper.
- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

# 3. Theory assumptions and proofs

Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?

Answer: [Yes]

Justification: All necessary theoretical background is given in Sec. 2, and full support for our results is in Sections 3-4.

Guidelines:

- The answer NA means that the paper does not include theoretical results.
- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
- All assumptions should be clearly stated or referenced in the statement of any theorems.
+- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. +- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. +- Theorems and Lemmas that the proof relies upon should be properly referenced. + +# 4. Experimental result reproducibility + +Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? + +Answer: [Yes] + +Justification: The Experiments section thoroughly describes what metrics were tested and how they were evaluated, as well as the publicly available datasets used. Scripts to replicate the experimental results are available at https://github.com/echbaba/walrus + +Guidelines: + +- The answer NA means that the paper does not include experiments. + +- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. + +- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. + +- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. 
In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. + +- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example + +(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. +(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. +(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). +(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in + +some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. + +# 5. Open access to data and code + +Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? + +Answer: [Yes] + +Justification: Code and data are available at https://osf.io/7kjcx/?view_only=5dc38b9776624deb9d1c0d8f88108658 + +Guidelines: + +- The answer NA means that paper does not include experiments requiring code. 
+- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. +- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). +- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. +- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. +- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. +- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). +- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. + +# 6. Experimental setting/details + +Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? + +Answer: [Yes] + +Justification: All the required information on both the datasets, and the exact experimental setting required to recreate the wavelet frame, are provided in the Appendix. This information can also be found in our code. + +Guidelines: + +- The answer NA means that the paper does not include experiments. 
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.

# 7. Experiment statistical significance

Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

Answer: [Yes]

Justification: Error bars and quantiles are provided in Figures 5 and 7, and explanations of their source are in the text and captions of the figures. Since MSE is not normally distributed, we chose to use quantiles and percentiles to reflect the distribution more accurately. We also provide Tables 1 and 2 to describe additional nuances of the comparison data.

Guidelines:

- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar rather than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified.
+- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). +- If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. + +# 8. Experiments compute resources + +Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? + +Answer: [Yes] + +Justification: Since our work did not involve any training, no GPU computation was necessary. More discussion is available in the Appendix (Sec. A.2.5). + +Guidelines: + +- The answer NA means that the paper does not include experiments. +- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. +- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. +- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). + +# 9. Code of ethics + +Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines? + +Answer: [Yes] + +Justification: We have conducted this research with integrity and reported our findings with honesty. The link to the Code of Ethics provided is broken, and so we have instead consulted this provisional copy of the document: https://openreview.net/forum?id=zVoy8kAFKPr. + +Guidelines: + +- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. 
+- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. +- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). + +# 10. Broader impacts + +Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? + +Answer: [Yes] + +Justification: This work is a basic mathematical result that does not have a targeted end use. We do note in our conclusion that improved function approximators, like the one we present here, can reduce the computational resources required for training certain types of neural networks – resources that have recently become a major environmental concern. + +Guidelines: + +- The answer NA means that there is no societal impact of the work performed. +- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. +- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. +- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. 
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

# 11. Safeguards

Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?

Answer: [NA]

Justification: This is a foundational and theoretical work that is primarily mathematical in nature: a compressive online approximation of time-series signals over a wavelet frame. The potential use cases for such a tool are similar in scope to that of a Fourier Transform; that is, it is too broad to responsibly hypothesize specific use cases or create guidelines.

Guidelines:

- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

# 12. Licenses for existing assets

Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

Answer: [Yes]

Justification: The M4 dataset does not have a required license: https://paperswithcode.com/dataset/m4. The SpeechCommands dataset has a CC BY license, allowing for unrestricted use with attribution to the author: https://huggingface.co/datasets/google/speech_commands. The four other data types we test on are generated by code that is made available with this paper, and based on [38].

Guidelines:

- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
- If this information is not available online, the authors are encouraged to reach out to the asset's creators.

# 13. New assets

Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?

Answer: [Yes]

Justification: An implementation of WaLRUS is provided with the code.

Guidelines:

- The answer NA means that the paper does not release new assets.
- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
- The paper should discuss whether and how consent was obtained from people whose asset is used.
- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

# 14. Crowdsourcing and research with human subjects

Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?

Answer: [NA]

Justification: There were no human subjects in this theoretical work.

Guidelines:

- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

# 15. Institutional review board (IRB) approvals or equivalent for research with human subjects

Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?

Answer: [NA]

Justification: There were no study participants in this theoretical work.

Guidelines:

- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
+- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. +- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. +- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review. + +# 16. Declaration of LLM usage + +Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required. + +Answer: [NA] + +Justification: We have used LLMs only to assist in writing and polishing the grammar. + +Guidelines: + +- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components. +- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described. 
# Walking the Schrödinger Bridge: A Direct Trajectory for
Text-to-3D Generation + +Ziying Li + +Zhejiang University emmalee@zju.edu.cn + +Xuequan Lu + +University of Western Australia bruce.lu@uwa.edu.au + +Xinkui Zhao* + +Zhejiang University zhaoxinkui@zju.edu.cn + +Guanjie Cheng + +Zhejiang University chengguanjie@zju.edu.cn + +Shuiguang Deng + +Zhejiang University dengsg@zju.edu.cn + +Jianwei Yin + +Zhejiang University +zjuyjw@cs.zju.edu.cn + +https://github.com/emmalee789/TraCe.git + +![](images/f90da411cc61f71f36a3baf43cacbfb7e2375d7f121233779665feea464c9367.jpg) +Figure 1: From left to right: (a) Standard VSD [45] ( $\text{CFG} = 7.5$ , CFG: Classifier-free Guidance); (b) Standard SDS [35]; (c) VSD [45] ( $\text{CFG} = 20$ ); (d) SDS [35] ( $\text{CFG} = 20$ ); (e) Ours ( $\text{CFG} = 20$ ). VSD with $\text{CFG} = 7.5$ and $\text{CFG} = 20$ both yield low-quality results. Standard SDS yields artifacts (e.g., over-smoothing) with high CFG, and SDS with low CFG yields low-quality results. Our method generates high-quality and high-fidelity results with a fair CFG value. + +![](images/fa6a63a42a91916a9104fb123c3499be2485a9b535fbdf0078d73449fe6e06f3.jpg) + +![](images/7f40c1e7f28860cf2587540947583008f958586ec3496f82d3cf43b4e2baa8d4.jpg) + +![](images/3418e68378272bfd1a38e52a706dd0667164eb0993ffdb37c8ea0057b94c6c60.jpg) + +![](images/3284fe19a89c8d1e17030956f4cd0fdb4b5ca09e433bbb6e2a91ba90db1fd566.jpg) + +# Abstract + +Recent advancements in optimization-based text-to-3D generation heavily rely on distilling knowledge from pre-trained text-to-image diffusion models using techniques like Score Distillation Sampling (SDS), which often introduce artifacts such as over-saturation and over-smoothing into the generated 3D assets. 
In this paper, we address this essential problem by formulating the generation process as learning an optimal, direct transport trajectory between the distribution of the current rendering and the desired target distribution, thereby enabling high-quality generation with smaller Classifier-free Guidance (CFG) values. First, we theoretically establish SDS as a simplified instance of the Schrödinger Bridge framework. We prove that SDS employs the reverse process of a Schrödinger Bridge, which, under specific conditions (e.g., Gaussian noise at one end), collapses to the score function of the pre-trained diffusion model used by SDS. Based upon this, we introduce Trajectory-Centric Distillation (TraCe), a novel text-to-3D generation framework, which reformulates the mathematically tractable framework of the Schrödinger Bridge to explicitly construct a diffusion bridge from the current rendering to its text-conditioned, denoised target, and trains a LoRA-adapted model on this trajectory's score dynamics for robust 3D optimization. Comprehensive experiments demonstrate that TraCe consistently achieves superior quality and fidelity compared to state-of-the-art techniques.

# 1 Introduction

Generating three-dimensional content directly from textual descriptions has recently attracted intensive attention in the research community. Recent methods leveraging explicit 3D representations like Gaussian Splatting have significantly accelerated the generation process [25, 3]. Despite these advancements, a key bottleneck remains: the quality and fidelity of generated 3D assets often lag behind their 2D counterparts. This limitation is frequently attributed to the scarcity of large-scale, high-quality 3D datasets required for direct supervised training [27, 28, 10].

To bridge this gap, many state-of-the-art text-to-3D methods employ optimization strategies guided by powerful, pre-trained 2D text-to-image (T2I) diffusion models [36].
Score Distillation Sampling (SDS) [35] has become the cornerstone paradigm. SDS leverages powerful pre-trained 2D text-to-image diffusion models to guide the optimization of 3D representations. Nevertheless, the standard SDS approach typically requires high values for Classifier-Free Guidance (CFG) [13] to achieve strong text alignment [35, 47, 4, 24, 49]. This reliance on high CFG values is often problematic, leading to visual artifacts such as over-saturation [37] and over-smoothing [23] in the generated 3D assets. Recognizing these issues, several variants of SDS have been proposed recently [45, 29, 17, 44, 48, 11, 6]. However, these SDS-based methods, including the recent variants, face persistent challenges. Firstly, as analyzed in recent studies [45, 1, 24], SDS and its variants fundamentally operate by matching the gradient direction predicted by the T2I model. While differing in their specific source and target choices for computing this gradient, they all rely on score estimates derived from the T2I backbone. These score estimates, however, can be noisy and are not guaranteed to represent an optimal direction for 3D optimization (shown in Figure 2b), potentially causing unexpected artifacts. Secondly, variants designed to operate effectively at lower CFG values (e.g., CFG=7.5), such as Score Distillation via Inversion (SDI) [29] or Variational Score Distillation (VSD) [45], have shown limited success when applied to optimizing certain popular 3D representations like 3D Gaussian Splatting (3DGS), often yielding undesirable results (shown in Figure 1).

The aforementioned analysis underscores the limitations of existing approaches and highlights the urgent need for a more robust optimization framework for text-to-3D generation, one that does not solely rely on potentially noisy score matching or operate under restrictive guidance conditions.
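As context for the CFG values discussed throughout, classifier-free guidance combines the diffusion model's text-conditioned and unconditional noise predictions by linear extrapolation. A minimal sketch of that weighting (variable names are illustrative, not from the paper's code):

```python
import numpy as np

def cfg_noise_estimate(eps_uncond, eps_cond, guidance_scale):
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the text-conditioned one. Larger scales give
    # stronger text alignment but drive the over-saturation and
    # over-smoothing artifacts discussed above.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

eps_u = np.zeros((2, 2))        # toy unconditional noise prediction
eps_c = np.full((2, 2), 0.1)    # toy text-conditioned noise prediction
guided = cfg_noise_estimate(eps_u, eps_c, 7.5)  # every entry is 0.75
```

A scale of 1 simply recovers the conditional prediction; the much larger scales standard SDS needs amplify whatever gap exists between the two estimates.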
In this paper, we first provide a theoretical insight by establishing that SDS can be understood as a simplified instance of the Schrödinger Bridge framework [39]. We demonstrate (Section 4.1) that SDS implicitly employs the reverse process of a Schrödinger Bridge, which, under specific conditions such as a Gaussian noise distribution at one endpoint, effectively collapses to utilizing the score function of the pre-trained diffusion model. This perspective not only clarifies the underlying dynamics of SDS but also illuminates pathways for more principled trajectory design. Based upon this reformulation, we introduce Trajectory-Centric Distillation (TraCe), a novel text-to-3D generation framework. TraCe leverages the mathematically tractable framework of Schrödinger Bridges [26] to explicitly construct and learn a diffusion bridge for text-to-3D generation. This bridge connects the current rendering $(X_{1})$ to its text-conditioned, denoised target $(X_0^{\mathrm{pred}})$, thereby defining a more stable and direct optimization trajectory (visualization in Figure 2a). TraCe then employs Low-Rank Adaptation (LoRA) [14] to fine-tune the T2I diffusion model specifically for navigating this constructed bridge, enabling it to precisely learn the score dynamics required for robust 3D optimization along this optimal trajectory towards the target distribution.

Our proposed TraCe framework, which operationalizes the direct transport path via Schrödinger Bridges, is rigorously evaluated. Extensive experiments demonstrate that this approach yields high-fidelity 3D assets with strong adherence to textual descriptions (Figure 4 and Table 1). The results consistently showcase TraCe's capacity to achieve superior visual quality and semantic coherence in generated content (Figure 4 and Supplementary), highlighting the efficacy of our theoretically grounded direct trajectory optimization for text-to-3D generation.
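The bridge described above pins a stochastic trajectory at both the current rendering and its denoised target. As a rough illustration only, under a plain Brownian-bridge assumption (our simplification for exposition; TraCe's actual bridge dynamics and schedule are defined by the paper, not by this snippet), the marginal at time $t$ can be sampled as:

```python
import numpy as np

def brownian_bridge_sample(x0, x1, t, sigma=1.0, rng=None):
    # Marginal of a Brownian bridge pinned at x0 (t=0) and x1 (t=1):
    # the mean interpolates linearly and the variance sigma^2 * t * (1-t)
    # vanishes at both endpoints, so the trajectory is anchored to the
    # predicted target on one side and the current rendering on the other.
    rng = np.random.default_rng() if rng is None else rng
    mean = (1.0 - t) * x0 + t * x1
    std = sigma * np.sqrt(t * (1.0 - t))
    return mean + std * rng.standard_normal(x0.shape)

x0 = np.zeros((4, 4))   # stand-in for the denoised target X_0^pred
x1 = np.ones((4, 4))    # stand-in for the current rendering X_1
mid = brownian_bridge_sample(x0, x1, 0.5)  # noisy halfway point
```

At $t = 0$ and $t = 1$ the noise term is exactly zero, which is the defining property that distinguishes a pinned bridge from the noise-anchored trajectory SDS implicitly uses.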
In summary, our contributions are:

- We establish a novel theoretical connection, demonstrating that SDS can be precisely understood as a special case of the Schrödinger Bridge framework. This reformulation clarifies the underlying transport dynamics implicitly leveraged by SDS.
- We introduce Trajectory-Centric Distillation (TraCe), a new text-to-3D generation framework. TraCe explicitly learns an optimal transport path, guided by a tractable Schrödinger Bridge formulation, between the current 3D model's rendering and a dynamically estimated, text-aligned target view. This is achieved by constructing and sampling along this explicit diffusion bridge, enabling more direct and stable 3D optimization.
- Experiments demonstrate that TraCe achieves high-quality 3D generation, surpassing current state-of-the-art techniques. TraCe exhibits enhanced robustness, particularly excelling at challenging low CFG values where the performance of existing methods typically degrades.

# 2 Related Work

Distilling 2D into 3D. Leveraging large-scale, pre-trained text-to-image (T2I) diffusion models [36] as priors has become a prominent technique for generation tasks in data-scarce domains, such as text-to-3D generation. SDS [35] is a seminal approach in this direction, enabling optimization of parametric representations (e.g., Neural Radiance Fields) by distilling knowledge from a 2D diffusion model. However, standard SDS is often susceptible to visual artifacts such as over-saturation [37] and over-smoothing [23], and to achieve plausible results it frequently necessitates high Classifier-Free Guidance (CFG) weights [35, 47], which can further exacerbate these issues. Moreover, the SDS objective itself, while empirically effective, does not strictly correspond to the gradient of a well-defined probability distribution of the 3D parameters [45, 1, 24], potentially leading to suboptimal optimization paths [17, 44, 29, 48].
To address these limitations, several variants have been proposed. For instance, methods like Variational Score Distillation (VSD) [45] and Classifier Score Distillation (CSD) [48] explore alternative gradient formulations to better approximate the optimization process from the source distribution towards the target distribution. Other approaches, like Score Distillation via Inversion (SDI) [29], try to better approximate the noise instead of using pure Gaussian noise. These variants can be understood through the lens of approximating an optimal transport path between the current image distribution (source) and the target natural image distribution; from this perspective, a key difference between these methods lies in how they approximate the score of the source and target distributions [30]. For instance, SDS approximates it using the unconditional score, while VSD attempts a more direct approximation by fine-tuning a LoRA adapter on the current renderings. While these methods offer valuable contributions towards reducing source-distribution mismatch artifacts, they fundamentally rely on adapting gradients derived from pre-trained T2I models. This forces the optimization process to cope with score functions optimized for 2D image generation, which is inherently not optimal for tasks like 3D generation due to the domain gap. Our work differs greatly from these approaches. We establish a novel theoretical connection, demonstrating that SDS can be precisely understood as a specific instantiation of the Schrödinger Bridge framework. This reformulation clarifies the underlying transport dynamics implicitly leveraged by SDS. Built upon this insight, we introduce a method that explicitly constructs and learns a more direct and stable optimization trajectory by framing the process as a tractable Schrödinger Bridge between the current rendering and an estimated text-aligned target, thereby enhancing both the fidelity and robustness of text-to-3D generation.
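VSD's adapter, and the LoRA-adapted model TraCe trains, both rest on the same low-rank update mechanism: the frozen pre-trained weight is augmented with a trainable product of two thin matrices. A minimal sketch of that mechanism only (not either method's training code; names and initialization scale are illustrative):

```python
import numpy as np

class LoRALinear:
    # Minimal LoRA sketch: the frozen weight W is augmented with a
    # trainable low-rank update (alpha / rank) * B @ A. B starts at
    # zero, so the adapted layer initially matches the pre-trained one.
    def __init__(self, W, rank=4, alpha=4.0, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        d_out, d_in = W.shape
        self.W = W                                          # frozen
        self.A = rng.standard_normal((rank, d_in)) * 0.01   # trainable
        self.B = np.zeros((d_out, rank))                    # trainable, zero init
        self.scale = alpha / rank

    def forward(self, x):
        return x @ (self.W + self.scale * self.B @ self.A).T

W = np.eye(3)                       # stand-in for a pre-trained weight
layer = LoRALinear(W)
x = np.array([1.0, 2.0, 3.0])
out = layer.forward(x)              # equals x @ W.T until B is trained
```

Only `A` and `B` receive gradients during fine-tuning, which is why the adapter can track the bridge's score dynamics without disturbing the T2I backbone.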
Diffusion Models and Schrödinger Bridges. Diffusion models (DMs) [12], also known as Score-based Generative Models (SGMs) [40, 42], have emerged as a dominant class of deep generative techniques, achieving state-of-the-art performance in synthesizing high-fidelity data across various domains, notably images [40, 12, 42, 9]. These models typically define a forward diffusion process, often formulated as a stochastic differential equation (SDE), that gradually corrupts data samples into a simple prior distribution, usually Gaussian noise. A neural network is then trained, often via score-matching objectives [16, 43, 42], to approximate the score function (gradient of the log density) of the perturbed data distributions. This learned score function parameterizes a reverse-time SDE that transforms samples from the prior back into data samples. While extremely successful, this standard paradigm typically relies on initiating the generative process from unstructured noise.

![](images/db4c4f42fdea5a83bc323d9568f8252996b7b568e7e821e9ea56defd29b4fd9e.jpg)
(a)

![](images/189c14bce0a22661635fd5014e654d595e7f63de381c027f1b09bbbf57519285.jpg)
(b)

Figure 2: Left: Schrödinger Bridge visualization and samples. Top: probability flow of the bridge from the current rendering $(x_{\mathrm{rnd}})$ to the predicted target $(x_0^{\mathrm{pred}})$ distribution. Bottom: corresponding image samples, showing the current rendering, intermediate bridge samples $(x_{t}^{i})$, and the final predicted target. Right: gradient and intermediate rendering comparison. The first row shows TraCe gradients, the second shows SDS gradients, and the third shows renderings of 3D models partway through generation. Note the reduced artifacts and potentially more coherent structure in the TraCe gradients and intermediate renderings.
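The denoising form of the score-matching objective mentioned above can be sketched on toy one-dimensional data, where the perturbed marginal's score is known in closed form (a sketch under these toy assumptions; all names are illustrative):

```python
import numpy as np

def dsm_loss(score_fn, x0, sigma, rng):
    # Denoising score matching: perturb clean samples with Gaussian
    # noise and regress the model's score at the noisy point onto the
    # score of the perturbation kernel, -(x_t - x0) / sigma^2 = -eps / sigma.
    eps = rng.standard_normal(x0.shape)
    xt = x0 + sigma * eps
    target = -eps / sigma
    return np.mean((score_fn(xt, sigma) - target) ** 2)

rng = np.random.default_rng(0)
x0 = rng.normal(size=100_000)     # toy "data": samples from N(0, 1)

# Under this kernel the noisy marginal is N(0, 1 + sigma^2), whose exact
# score is -x / (1 + sigma^2); it attains the lowest achievable DSM loss.
true_score = lambda x, s: -x / (1.0 + s**2)
zero_score = lambda x, s: np.zeros_like(x)
loss_true = dsm_loss(true_score, x0, 0.5, rng)
loss_zero = dsm_loss(zero_score, x0, 0.5, rng)
```

Minimizing this regression loss over a neural `score_fn` is what yields the learned score that the reverse-time SDE then consumes.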
The Schrödinger Bridge problem provides a more general theoretical framework, originating from statistical physics [38, 39] and connected to entropy-regularized optimal transport [21, 5] and stochastic control [7, 34]. It aims to find the most likely stochastic evolution between two specified arbitrary distributions, $p_A$ and $p_B$, rather than being restricted to a noise prior. This offers the potential to learn direct transformations between complex data manifolds. Attempts have been made to apply Schrödinger Bridge concepts to text-to-3D generation. For instance, [30] proposes a naive approach to a direct Schrödinger Bridge formulation between current renderings and target images guided by text prompts, though this requires an initial stage involving standard SDS. Another approach, DreamFlow [20], proposes to approximate the backward Schrödinger Bridge dynamics between current renderings and target images by simply repurposing a fine-tuned text-to-image model, a heuristic that potentially deviates from the true underlying Schrödinger Bridge process. We critically advance text-to-3D generation by establishing the precise theoretical relationship between SDS and Schrödinger Bridges. This foundational insight is then exploited to develop a principled methodology for direct distributional transport, enabling the construction of trajectories towards text-aligned target distributions.

# 3 Preliminaries

Score-based Generative Model (SGM) and Schrödinger Bridge. Score-based Generative Models (SGMs) [40, 42] learn to generate data by reversing a predefined forward diffusion process. This process gradually transforms data $X_0 \sim p_A$ into noise $X_1 \approx \mathcal{N}(0, I)$ and is often governed by a forward stochastic differential equation (SDE). Generation then proceeds by simulating the corresponding reverse-time SDE [2], starting from $X_1$ and integrating backward to $t = 0$.
The forward and reverse SDEs are given by:

$$
\begin{aligned}
dX_t &= f_t(X_t)\,dt + g_t\,dW_t && \text{(forward)} \\
dX_t &= \left[ f_t(X_t) - g_t^2 \nabla_{X_t} \log p(X_t, t) \right] dt + g_t\,d\bar{W}_t && \text{(backward)}
\end{aligned}
$$

Here, $W_{t}$ (and $\bar{W}_t$) is a standard Wiener process, and $g_{t}$ represents the time-dependent diffusion coefficient. The central part of this reversal is the score function $\nabla_{X_t}\log p(X_t,t)$, which is unknown and approximated using a time-conditioned neural network $s_{\psi}(X_t,t)$ (or an equivalent noise predictor $\epsilon_{\psi}(X_t,t)$). This network is trained using score-matching objectives [43, 42] on pairs $(X_0,X_t)$ sampled from the forward process. Sampling is performed by numerically integrating the reverse SDE using solvers like DDPM [12] or DDIM [41].

The Schrödinger Bridge problem [39, 21] generalizes SGMs to learn nonlinear diffusion processes between two arbitrary distributions, $X_0 \sim p_A$ and $X_1 \sim p_B$. It seeks the most likely stochastic evolution connecting these boundary distributions, described by a pair of forward and backward SDEs:

$$
d X_t = \left[ f_t(X_t) + \beta_t \nabla \log \Psi(X_t, t) \right] dt + \sqrt{\beta_t}\, d W_t \quad \text{(forward)}
$$

$$
d X_t = \left[ f_t(X_t) - \beta_t \nabla \log \hat{\Psi}(X_t, t) \right] dt + \sqrt{\beta_t}\, d \bar{W}_t \quad \text{(backward)}
$$

where $\Psi(x,t)$ and $\hat{\Psi}(x,t)$ are non-negative functions known as Schrödinger factors, determined by coupled partial differential equations with boundary conditions $\Psi(x,0)\hat{\Psi}(x,0) = p_A(x)$ and $\Psi(x,1)\hat{\Psi}(x,1) = p_B(x)$.
The forward and backward processes induce the same marginal density $q(x,t)$ at any time $t \in [0,1]$, satisfying Nelson's duality $\Psi(x,t)\hat{\Psi}(x,t) = q(x,t)$ [33]. Notably, an SGM is a special case where $p_B \approx \mathcal{N}(0,I)$ and $\Psi(x,t) \approx 1$, causing the forward drift modification to vanish and giving $\hat{\Psi}(x,t) \approx q(x,t)$, which recovers the score function in the reverse SDE.

Score Distillation Sampling (SDS). Score Distillation Sampling (SDS) [35] enables generating 3D assets by leveraging powerful pre-trained 2D text-to-image diffusion models [36], bypassing the need for large-scale 3D datasets. It optimizes the parameters $\theta$ of a differentiable 3D representation, such as NeRF [31], InstantNGP [32], or 3D Gaussian Splatting (3DGS) [18], using gradients derived from the diffusion model. In this work, we adopt 3DGS primarily for its rapid generation capabilities and high-fidelity visual output.

The core mechanism of SDS involves repeatedly rendering the 3D model from different views $c$ (yielding $x = g(\theta, c)$), adding noise to the rendering $x$ at a sampled timestep $t$, and using the 2D diffusion model's score estimate (denoising prediction $\epsilon_{\mathrm{pred}}$) to guide the optimization of $\theta$. Formally, the gradient is computed as

$$
\nabla_{\theta} \mathcal{L}_{\mathrm{SDS}}(\theta) = \mathbb{E}_{t, \epsilon, c} \left[ w(t) \left( \epsilon_{\mathrm{pred}} - \epsilon_{\mathrm{noise}} \right) \frac{\partial x_{\mathrm{rndr}}}{\partial \theta} \right] \tag{1}
$$

where $w(t)$ is a weighting factor and the term $(\epsilon_{\mathrm{pred}} - \epsilon_{\mathrm{noise}})$ provides the guidance signal. While SDS can be intuitively understood as moving renderings towards higher-density regions according to the 2D prior, or formally interpreted via probability density distillation, the exact nature of its gradient signal is debated [17, 48, 44, 1, 45].
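The bracketed term of Eq. (1) can be sketched as follows. Here `eps_pred_fn` and the schedule value `alpha_bar_t` are illustrative stand-ins for the frozen pre-trained diffusion model, not the paper's implementation; the renderer Jacobian $\partial x_{\mathrm{rndr}} / \partial \theta$ is applied outside this function:

```python
import numpy as np

rng = np.random.default_rng(0)

def sds_residual(x_rndr, eps_pred_fn, alpha_bar_t, w_t=1.0):
    """Compute w(t) * (eps_pred - eps_noise) from Eq. (1).

    eps_pred_fn is a placeholder for the frozen diffusion model's
    noise prediction on the noised rendering x_t."""
    eps_noise = rng.standard_normal(x_rndr.shape)
    x_t = np.sqrt(alpha_bar_t) * x_rndr + np.sqrt(1.0 - alpha_bar_t) * eps_noise
    return w_t * (eps_pred_fn(x_t) - eps_noise)

# Toy check: a denoiser that recovers the injected noise exactly yields a zero
# residual, i.e. the rendering already lies on the model's data manifold.
x = rng.standard_normal(8)
g = sds_residual(x, lambda xt: (xt - np.sqrt(0.5) * x) / np.sqrt(0.5), alpha_bar_t=0.5)
print(np.allclose(g, 0.0))  # True
```

In practice the residual is nonzero and, scaled by the renderer Jacobian, nudges $\theta$ so that renderings move towards the 2D prior's high-density regions.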
Practically, SDS often requires high classifier-free guidance (CFG) values, which can lead to artifacts such as oversaturation or blur [17, 29, 46, 19, 37, 22]. Furthermore, strategies that employ lower CFG values, such as the text-to-NeRF methods [45, 29], have shown limitations when applied directly to 3D asset generation with Gaussian Splatting (Figure 4). Recent efforts such as LucidDreamer [23] have investigated text-to-3DGS under low CFG conditions; however, this direction currently faces trade-offs, including prolonged optimization (over 5000 iterations) and limits on the attainable visual quality (Figure 4). Our work builds upon SDS by mitigating these issues: we derive a more direct and tractable optimization path, formulating Schrödinger Bridges to guide the generation process and achieve greater fidelity at lower CFG values.

# 4 Method

The preceding analysis of existing methods like Score Distillation Sampling (SDS) raises a natural question: can a principled framework be developed to define a direct, optimal transformation trajectory whose source and target ends are both explicitly and robustly aligned with the desired true distributions? Addressing this challenge—by establishing explicit control over the distributional endpoints of the generative trajectory, rather than relying on unstructured priors (e.g., Gaussian noise)—is crucial for enhancing the fidelity and control of generative outcomes. To this end, we exploit the theoretical underpinnings of the Schrödinger Bridge problem, particularly its tractable formulations [26, 8], which provide a robust mechanism for learning direct, optimal transport paths between specified distributions. Our methodological contribution unfolds in two stages: first, we theoretically establish that standard SDS is a special case of the Schrödinger Bridge framework, thereby providing a new perspective on its operation (Section 4.1).
Second, building upon this insight, we propose a novel optimization algorithm grounded in tractable Schrödinger Bridge principles to achieve improved distributional alignment throughout the generative process (Section 4.2).

![](images/ee71589dbb401a1d27cdb136542cbfc59a5dc82b5cf53fb2b33c4d2d955bc18d.jpg)
Figure 3: Overview of Trajectory-Centric Distillation (TraCe). TraCe optimizes 3D parameters $\theta$ by computing a distillation gradient with a LoRA-adapted 2D diffusion model, $\epsilon_{\phi}$. Given a text prompt $y$ and camera parameters $c$, (1) the current 3D model is rendered from a random view to produce $x_{\mathrm{rndr}}$. (2) An ideal target view $x_0^{\mathrm{pred}}$ is estimated from $x_{\mathrm{rndr}}$ using a pre-trained diffusion model $\epsilon_{\mathrm{pretrain}}$ via one-step denoising. (3) An intermediate latent $x_{t}$ is sampled from the analytic bridge posterior $q(x_{t} \mid x_{0}^{\mathrm{pred}}, x_{\mathrm{rndr}})$ at time $t$. (4) The LoRA model $\epsilon_{\phi}$ predicts the noise for $x_{t}$, and the difference between this prediction and the target noise is computed. (5) This difference directs the calculation of the TraCe gradient $\nabla_{\theta} \mathcal{L}_{\mathrm{TraCe}}$ and drives the update of the LoRA parameters $\phi$.

# 4.1 Score Distillation Sampling as a Special Case of Schrödinger Bridges

In this section, we reformulate the SDS objective by examining its core guidance principles, and show that it employs a simplified form of the backward dynamics found in the Schrödinger Bridge framework.

As established in Section 3, a Score-based Generative Model (SGM) aligns with a special configuration of the Schrödinger Bridge problem. This occurs when the Schrödinger Bridge's distribution $p_B$ at $t = 1$ is Gaussian noise $(p_B \approx \mathcal{N}(0, I))$ and its forward Schrödinger factor satisfies $\Psi(x, t) \approx 1$.
Under these conditions, the term $\beta_t \nabla_{X_t} \log \Psi(X_t, t)$ in the forward Schrödinger Bridge SDE vanishes, causing the forward Schrödinger Bridge dynamics to become identical to the SGM's standard diffusion process. Consequently, the marginal densities $q(X_t, t)$ of this particular Schrödinger Bridge are equivalent to the SGM's noisy marginals $p(X_t, t)$.

The crucial step in linking the Schrödinger Bridge and SGM reverse processes from a score perspective lies in Nelson's duality, $\Psi(X_{t}, t) \hat{\Psi}(X_{t}, t) = q(X_{t}, t)$. Given $\Psi(X_{t}, t) \approx 1$ and $q(X_{t}, t) = p(X_{t}, t)$ for this specific Schrödinger Bridge, the duality simplifies to:

$$
1 \cdot \hat{\Psi}(X_t, t) \approx p(X_t, t) \Longrightarrow \hat{\Psi}(X_t, t) \approx p(X_t, t) \tag{2}
$$

This directly implies that the score term in the general Schrödinger Bridge backward SDE, $-\nabla_{X_t}\log \hat{\Psi}(X_t,t)$, becomes $-\nabla_{X_t}\log p(X_t,t)$. This is precisely the score approximated by the learned network $s_{\psi}(X_t,t)$ (or its equivalent noise predictor $\epsilon_{\psi}(X_t,t)$) in an SGM.

SDS utilizes this learned score $s_{\psi}(X_t, t)$ from a pre-trained SGM to guide the optimization of a differentiable generator $g(\theta)$. The update for $g(\theta)$ is fundamentally derived from $s_{\psi}(X_t, t)$, aiming to make the generated samples $x_0 = g(\theta)$ consistent with the data manifold learned by the SGM.

Therefore, from a score gradient perspective:

- SDS operates using the score function $s_{\psi}(X_t, t)$ learned by an SGM.
- The derivation above shows that $s_{\psi}(X_t, t)$ (approximating $\nabla_{X_t} \log p(X_t, t)$) is equivalent to the score $-\nabla_{X_t} \log \hat{\Psi}(X_t, t)$ of a Schrödinger Bridge under the specific conditions that reduce the Schrödinger Bridge to an SGM.

Remark.
In essence, SDS leverages a score gradient that is equivalent to the score function governing the reverse dynamics of the canonical Schrödinger Bridge implicit in any SGM. While general Schrödinger Bridges can offer more complex dynamics, SDS employs the score from this specific, simplified Schrödinger Bridge structure. Thus, the SDS mechanism represents an application of principles governing a special case of Schrödinger Bridges, distinguished by its reliance on the SGM-derived score $s_{\psi}$.

# 4.2 Trajectory-Centric Distillation

To optimize the 3D model parameters $\theta$ such that current renderings $x_{\mathrm{rndr}} = g(\theta, c)$ align with a target text description $y$, we propose a novel method, Trajectory-Centric Distillation (TraCe). This method leverages a 2D diffusion model, adapted with LoRA parameters $\phi$ and denoted $\epsilon_{\phi}$, to provide a guiding gradient $\nabla_{\theta} \mathcal{L}_{\mathrm{TraCe}}(\theta)$. The core idea is to conceptualize a diffusion bridge between the current rendering and an estimated ideal target image.

Constructing the Diffusion Bridge for Trajectory Guidance. At each optimization step for $\theta$, we construct a specific diffusion bridge instance defined by two endpoints:

1. Initial Bridge Endpoint $(X_{1} \gets x_{\mathrm{rndr}})$: The current rendering $x_{\mathrm{rndr}} = g(\theta, c)$ serves as the starting point of the reverse diffusion trajectory we aim to learn. In the context of our bridge, this is treated as the "noisier" state at bridge time $t = 1$.
2. Target Bridge Endpoint $(X_0 \gets x_0^{\mathrm{pred}})$: An estimated ideal target view $x_0^{\mathrm{pred}}$ acts as the desired endpoint at bridge time $t = 0$.
This target is dynamically obtained by performing one-step denoising on $x_{\mathrm{rndr}}$ using a pre-trained text-to-image model $\epsilon_{\mathrm{pretrain}}$ [20], conditioned on the text prompt $y$: $x_0^{\mathrm{pred}} = (x_{\mathrm{rndr}} - \sqrt{1 - \bar{\alpha}_{t'}} \epsilon_{\mathrm{pretrain}}(x_{\mathrm{rndr}}, t', y)) / \sqrt{\bar{\alpha}_{t'}}$, where $\bar{\alpha}_{t'}$ is taken from the noise schedule of $\epsilon_{\mathrm{pretrain}}$.

With these two endpoints, $x_0^{\mathrm{pred}}$ and $x_{\mathrm{rndr}}$, established, we then sample an intermediate latent state $x_t$ along the conceptual bridge. For a sampled time $t \in [0.02, 0.5]$, following the tractable formulation of Schrödinger Bridges [26], $x_t$ is drawn from the analytically known conditional distribution $x_t \sim q(x_t \mid x_0^{\mathrm{pred}}, x_{\mathrm{rndr}}) = \mathcal{N}(x_t; \boldsymbol{\mu}_t, \Sigma_t I)$, where the mean $\boldsymbol{\mu}_t = \gamma_t x_0^{\mathrm{pred}} + (1 - \gamma_t) x_{\mathrm{rndr}}$ interpolates between the target image and the current rendering, and $\Sigma_t = \sigma_t^2 \bar{\sigma}_t^2 / (\sigma_t^2 + \bar{\sigma}_t^2)$ is the bridge variance. The coefficient $\gamma_t = \bar{\sigma}_t^2 / (\sigma_t^2 + \bar{\sigma}_t^2)$, and $\sigma_t^2 = \int_0^t \beta_\tau d\tau$, $\bar{\sigma}_t^2 = \int_t^1 \beta_\tau d\tau$ are accumulated variances from a noise schedule $\beta_t$ specific to this bridge construction. This $x_t$ represents a state on a direct trajectory from $x_0^{\mathrm{pred}}$ being progressively "noised" towards $x_{\mathrm{rndr}}$ (or equivalently, $x_{\mathrm{rndr}}$ being progressively "denoised" towards $x_0^{\mathrm{pred}}$ along this trajectory).

Optimizing $\theta$ via the Bridge Trajectory. We then optimize $\theta$ using the LoRA-adapted model $\epsilon_{\phi}(x_t, t, y, c)$, which is trained to predict the noise that would take $x_t$ towards $x_0^{\mathrm{pred}}$.
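Sampling from the analytic bridge posterior defined above can be sketched as follows. Purely for illustration we assume a constant schedule $\beta_\tau = 1$ and a simple Riemann-sum quadrature; the paper's actual bridge schedule is not specified here:

```python
import numpy as np

rng = np.random.default_rng(0)

def bridge_posterior_sample(x0_pred, x_rndr, t, beta=lambda tau: 1.0, n_grid=1000):
    """Draw x_t ~ q(x_t | x_0^pred, x_rndr) = N(mu_t, Sigma_t I) on the bridge."""
    taus = np.linspace(0.0, 1.0, n_grid)
    dt = taus[1] - taus[0]
    b = np.array([beta(tau) for tau in taus])
    sigma2 = b[taus < t].sum() * dt        # sigma_t^2     = int_0^t beta_tau dtau
    bar_sigma2 = b[taus >= t].sum() * dt   # bar_sigma_t^2 = int_t^1 beta_tau dtau
    gamma = bar_sigma2 / (sigma2 + bar_sigma2)
    mu = gamma * x0_pred + (1.0 - gamma) * x_rndr      # interpolated mean
    var = sigma2 * bar_sigma2 / (sigma2 + bar_sigma2)  # bridge variance Sigma_t
    return mu + np.sqrt(var) * rng.standard_normal(np.shape(x0_pred)), gamma

x0_pred, x_rndr = np.zeros(4), np.ones(4)
xt, gamma = bridge_posterior_sample(x0_pred, x_rndr, t=0.25)
# With beta == 1, gamma is roughly 1 - t = 0.75, so the mean of x_t sits
# three quarters of the way from x_rndr towards x0_pred.
```

Small $t$ thus concentrates $x_t$ near the estimated target $x_0^{\mathrm{pred}}$, while $t$ near 1 keeps it close to the current rendering.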
The objective for $\theta$ utilizes $\epsilon_{\phi}$ to measure the consistency of $x_t$ with respect to $x_{\mathrm{rndr}}$ along this bridge:

$$
\nabla_{\theta} \mathcal{L}_{\mathrm{TraCe}}(\theta) = \mathbb{E}_{\epsilon, t, c} \left[ w(t) \left( \epsilon_{\phi}(x_t, t, y, c) - \frac{x_t - x_{\mathrm{rndr}}}{\sigma_t} \right) \left( \underbrace{\frac{\partial x_0^{\mathrm{pred}}(x_{\mathrm{rndr}}, t, y)}{\partial x_t}}_{\text{U-Net Jacobian}} \frac{\partial x_t}{\partial x_{\mathrm{rndr}}} + 1 \right) \frac{\partial x_{\mathrm{rndr}}}{\partial \theta} \right] \tag{3}
$$

where $x_{\mathrm{rndr}} = g(\theta, c)$, $t \sim \mathcal{U}[0.02, 0.5]$, and $y$ is the text prompt. The term $x_{t}$ is sampled from $q(x_{t} \mid x_{0}^{\mathrm{pred}}, x_{\mathrm{rndr}})$ as defined above, and $\sigma_{t} = \sqrt{\int_{0}^{t} \beta_{\tau} d\tau}$ comes from the bridge's noise schedule. Following the convention of SDS, we omit the U-Net Jacobian term $\left( \frac{\partial x_{0}^{\mathrm{pred}}(\ldots)}{\partial x_{t}} \frac{\partial x_{t}}{\partial x_{\mathrm{rndr}}} + 1 \right)$ for effective training, as it can be treated as a learnable or constant factor absorbed by $w(t)$. Thus, we have:

$$
\nabla_{\theta} \mathcal{L}_{\mathrm{TraCe}}(\theta) = \mathbb{E}_{\epsilon, t, c} \left[ w(t) \left( \epsilon_{\phi}(x_t, t, y, c) - \frac{x_t - x_{\mathrm{rndr}}}{\sigma_t} \right) \frac{\partial x_{\mathrm{rndr}}}{\partial \theta} \right] \tag{4}
$$

Scheduled $t$-Sampling for Schrödinger Bridge Interpolation. For sampling the intermediate state $x_{t}$ in our TraCe objective (Eq. (4)), which dictates the characteristics of $x_{t} \sim q(x_{t} \mid x_{0}^{\mathrm{pred}}, x_{\mathrm{rndr}})$, we adopt a $t$-annealing strategy, similar to the approach proposed in [15].
Throughout the optimization of $\theta$, the time parameter $t$ is progressively decreased from an initial value near 0.5 towards 0.02. This annealing technique gradually shifts the focus of the Schrödinger Bridge interpolation from broader states towards states closer to the estimated ideal target $x_{0}^{\mathrm{pred}}$, aiding the progressive refinement of the rendered output $g(\theta, c)$.

# 5 Experiments

![](images/f2f7fadbe165e6b2a42e2ccbeb7064cf223926c26368c6ad9d02b7c71ea5ced2.jpg)
a hermit crab with a colorful shell

![](images/f83a6aea713e487388a11e440b019c1083ebf1f985c419df4eb683f014e8f11a.jpg)

![](images/e35f58f244b9048c63af3998af4ddfb1a6b246c842ce0494eb1ab4a0279f9325.jpg)
a golden goblet, low poly

![](images/fd89b1739055b76eee835aecbad1c8a9de324825d85f9cebcda1b487a15ab123.jpg)

![](images/b23f59d631f6f846dbec7e5ab503d6ac13afe339ebb427f100d3e621a200aab0.jpg)
CSD
a fox

![](images/f0d3e0445746dc8782de9547098f243f6fb57756fdf790b7209e30a109901265.jpg)
Ours

![](images/6259b3130f53a78ac23d7f031ac537b354684603bbec743a6121130a3decca0b.jpg)
CSD
A zoomed out DSLR photo of an amigurumi motorcycle

![](images/1864e4a52846c72cbf396df40dd5a425801415616bd49cf589aef987f3ebccd4.jpg)
Ours

![](images/fe8384a2cb5dfb905be6b82332dbeed52add618d2ea9fec3b6434bcbb7a879ce.jpg)
VSD
A car made out of sushi

![](images/647bf2dbfceccf5d4f38badda6cdac57a8e1aa15ee42362c03e4db11d1a95da2.jpg)
Ours

![](images/abafee86f309274977bcf4a00bdc063320dca5f91924f0d29c9db2291eaafe76.jpg)
VSD
A large, multi-layered, symmetrical wedding cake, with smooth fondant, delicate piping, and lifelike sugar flowers in full bloom, displayed on a silver stand.
![](images/a949ec80cda0a064c99479fbbb2605eae7bf5a8033d4a1e997762a6c2e60ab53.jpg)
Ours

![](images/f4283161b0bae0d5f69c3a87e706f70bb866e4154e9332d96563071194ad4206.jpg)
SDI
a blue lobster

![](images/991068040d3a62dc5a87599353cd1d6e6522186984c7db68d839a6f7eaf129ea.jpg)
Ours

![](images/37ef9b8c36fa44e4760a4d6139657a384365a663af4574f7c664b553e68dd952.jpg)
SDI
a roast turkey on a white platter

![](images/0f8b82644c4ae4a2893233a35d268bb6725f8668ea2648c467c2d81e40dda859.jpg)

![](images/f6dd1a8093da928a3a634594c17ac799cfa043090312a2d7327dd51c7d7834c9.jpg)
ISM

![](images/254c5ff5273bd63447e78a805b1fa93381334ac3f7e71405595083a63a1d8408.jpg)
Ours
a delicious croissant

![](images/e51b72b62ce0dcede92a87d0b18b735168ca54547dac1c409ca0fb955d70a5b9.jpg)
ISM

![](images/0c400548ccef50083499af8802402bf0001ed51cda2d6a0cc0439a7ce765c7a9.jpg)
Ours
a violin
Figure 4: Qualitative comparisons. We present visual examples generated with the same text prompts.

Implementation Details. We choose recent state-of-the-art (SOTA) text-to-3D approaches for comparison: NeRF-based methods, such as Classifier Score Distillation (CSD) [48], ProlificDreamer (VSD) [45], and Score Distillation via Inversion (SDI) [29], and 3DGS-based methods like GaussianDreamer (SDS) [47] and LucidDreamer (ISM) [23]. Please see the Supplementary for more details and experiments.

Qualitative Comparisons. Figure 4 presents visual results for several challenging text prompts. Our approach generates higher-quality 3D assets than other SOTA methods. Compared to SDI [29], our method yields significantly improved texture fidelity. Outputs from CSD [48] often exhibit a characteristic yellowish hue and a less realistic, cartoon-like appearance, which TraCe avoids, producing more natural color rendition and photorealism.
When compared against VSD [45], our model better interprets complex textural and stylistic prompts, accurately capturing the text's message and generating more coherent content. Contrasting with SDS [47], our results exhibit superior sharpness and finer details in both geometry and texture, leading to more visually appealing and realistic outputs. While ISM [23] can produce coherent structures, its outputs often exhibit a stylized, painterly quality; in contrast, our TraCe generates results with enhanced photorealism and more natural material appearance. These results demonstrate our method's effectiveness in generating detailed and accurate 3D geometry and appearance from the given text descriptions.

Quantitative Comparison. We quantitatively evaluate TraCe against other methods using 83 distinct prompts from the DreamFusion online gallery with 120 views per prompt. We benchmark generation quality using CLIP Score (%), GPTEval3D (Overall) (which leverages GPT-4o for evaluation), and ImageReward. CLIP Scores are evaluated with ViT-L/14, ViT-B/16, and ViT-B/32 backbones. We also assess computational efficiency via processing time (Time) and average peak VRAM (VRAM). As shown in Table 1, the proposed TraCe achieves state-of-the-art generation quality, securing top CLIP Scores across all ViT backbones, e.g., $69.2609 \pm 7.8366\%$ with ViT-L/14. Furthermore, TraCe demonstrates superior performance on advanced perception metrics, yielding the highest GPTEval3D score of 1028.03 and the most favorable (least negative) ImageReward score of $-0.2855 \pm 0.8909$, indicating enhanced aesthetic quality and semantic alignment. With an average processing time of 14 minutes and an average peak VRAM usage of 18741 MiB, TraCe offers high-fidelity generation with a compelling balance of qualitative performance, computational efficiency, and memory footprint.

Table 1: Quantitative comparisons.
Comparison of different methods on CLIP Score, GPTEval3D Score, ImageReward Score, running time, and VRAM usage. We report mean and standard deviation across 83 prompts and 120 views. + +
| Method | CLIP Score (%) ↑ ViT-L/14 | ViT-B/16 | ViT-B/32 | GPTEval3D (Overall) ↑ | ImageReward ↑ | Time | VRAM |
|---|---|---|---|---|---|---|---|
| SDS [47] | 68.6146±7.9134 | 27.7049±3.7004 | 27.5561±3.5893 | 1018.09 | -0.4329±0.9125 | 10min | 18147MiB |
| CSD [48] | 68.0282±7.5093 | 27.0886±3.7342 | 26.5844±3.8703 | 983.04 | -0.6715±0.7482 | 11min | 19804MiB |
| VSD [45] | 67.2697±8.5573 | 27.0749±3.9675 | 26.9722±3.9563 | 1007.49 | -0.5330±0.8927 | 17min | 26473MiB |
| ISM [23] | 69.0093±10.2400 | 27.5460±3.6817 | 26.9822±3.5495 | 1012.37 | -0.3904±0.9503 | 20min | 10151MiB |
| SDI [29] | 63.0409±11.7841 | 25.6487±5.2540 | 25.5421±5.0903 | 971.98 | -0.8334±1.0391 | 10min | 16011MiB |
| TraCe | 69.2609±7.8366 | 27.9334±3.7382 | 27.7049±3.8671 | 1028.03 | -0.2855±0.8909 | 14min | 18741MiB |
![](images/c0ec5104e7345623164a473106d11cdeb43e5dfbb0ff5276672973b33cc7e2af.jpg)
VSD

![](images/20c20add59cd046da657948538d615b824294ca74363ad21474aa92f16b007ce.jpg)
CSD

![](images/118b811a53661eaa06f615e83f1b0e0277086f9900b987ae06d9188e4d9d377c.jpg)
Naive bridge

![](images/e98b05a3f89d0e8bb9b27f78f6464ce61b5271b19de2c0000ce1f03da2fe1051.jpg)
w/o LoRA &
Scheduled t-Sampling

![](images/ac8ce56deef8adf1866fd86341afa04bd093a54b8c428b90a16c0bef580a624c.jpg)
Ours w/o
Scheduled t-Sampling
Figure 5: Ablation study on our framework.

![](images/261aad9b2e3c84a17fdbe36bc208f30e70f0cfeef76fcfa791e5cfa4f645a7a8.jpg)
Ours

Ablation Study. Figure 5 showcases the ablation study of our TraCe on fox generation. VSD [45] and CSD [48] exhibit less desirable generations (e.g., missing details). The third column illustrates a naive Schrödinger Bridge approach [30] that bridges distributions defined by source and target prompts, resulting in a comparatively smoother, less detailed rendering. The fourth column shows TraCe without LoRA adaptation and without our scheduled $t$-sampling, where noticeable artifacts such as blue hues on the fur are apparent. Introducing LoRA but omitting the scheduled $t$-sampling (fifth column) mitigates some artifacts, yet color inconsistencies persist. Finally, our full TraCe method ("Ours")—supported by LoRA-adapted learning of its specific score dynamics and an annealed $t$-sampling schedule—generates significantly higher-fidelity details in the fur and tail, boosting overall realism compared to the other methods (VSD, CSD) and the ablated variants. These results highlight the role of our core Schrödinger Bridge formulation: it achieves superior final quality when augmented with these tailored learning components.

Table 2: ImageReward ablation over LoRA and scheduled $t$-sampling.
| Method Configuration | ImageReward (↑) |
|---|---|
| LoRA off & scheduled t-sampling off | -0.4488 ± 0.9964 |
| LoRA off & scheduled t-sampling on | -0.3389 ± 0.9721 |
| LoRA on & scheduled t-sampling off | -0.4020 ± 1.0019 |
| LoRA on & scheduled t-sampling on (ours) | -0.2486 ± 0.8909 |
We perform an ablation study on our key components, LoRA adaptation and scheduled $t$-sampling, measuring quality with ImageReward (Table 2). Our full method (-0.2486) significantly outperforms the baseline with both components off (-0.4488), as well as the variants with only LoRA (-0.4020) or only scheduled $t$-sampling (-0.3389). The results confirm that both components are crucial and demonstrate their strong synergistic effect.

CFG value. We investigate the impact of the CFG value on TraCe, as illustrated in Figure 6 with two example objects. While very low CFG values (e.g., 5) yield reduced visual fidelity, TraCe produces high-quality, well-defined results starting at a CFG of approximately 15-20. The visual outcomes are stable and robust within the CFG 15-20 range. Beyond this, at higher CFG values (25-100), results remain largely consistent with minimal further improvement. This demonstrates TraCe's capability to generate high-quality 3D assets at relatively low and stable CFG settings. Furthermore, TraCe's enhanced visual quality is complemented by robust CLIP score performance within a moderate CFG range (e.g., 10-30) relative to the other compared methods, as detailed in Figure ??.

![](images/59c79348a0dd182fe973676088afac3883262ed6f240982dd7ac4c38b083ce02.jpg)
Figure 6: Different CFG values and the generated 3D assets. Prompts are "an overstuffed pastrami sandwich" (top row) and "a car made out of sushi" (bottom row).

# 6 Conclusion

We introduce Trajectory-Centric Distillation (TraCe), a novel text-to-3D generation framework. Our approach is rooted in a new theoretical understanding of SDS as a specific instance of the Schrödinger Bridge problem. The proposed TraCe explicitly constructs and learns a direct diffusion bridge between current renderings and text-conditioned targets, employing a LoRA-adapted diffusion model to accurately model the bridge's score dynamics.
Comprehensive experiments demonstrate TraCe's state-of-the-art performance, yielding 3D assets with superior visual quality and fidelity, notably at lower and more stable Classifier-Free Guidance values than prior methods. These results underscore the benefits of our principled, direct optimization trajectory. We believe TraCe offers new insights for text-to-3D generation in terms of efficient and robust trajectory learning for generative models.

# 7 Acknowledgments

This work was supported in part by the National Science Foundation of China under Grant 62472375, in part by the Major Program of the National Natural Science Foundation of Zhejiang (LD24F020014, LD25F020002), in part by the Zhejiang Pioneer (Jianbing) Project (2024C01032), and in part by the Ningbo Yongjiang Talent Programme (2023A-198-G).

# References

[1] Thiemo Alldieck, Nikos Kolotouros, and Cristian Sminchisescu. Score distillation sampling with learned manifold corrective. In European Conference on Computer Vision, pages 1-18. Springer, 2024.
[2] Brian DO Anderson. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313-326, 1982.
[3] Yuanhao Cai, He Zhang, Kai Zhang, Yixun Liang, Mengwei Ren, Fujun Luan, Qing Liu, Soo Ye Kim, Jianming Zhang, Zhifei Zhang, et al. Baking gaussian splatting into diffusion denoiser for fast and scalable single-stage image-to-3d generation. arXiv preprint arXiv:2411.14384, 2024.
[4] Rui Chen, Yongwei Chen, Ningxin Jiao, and Kui Jia. Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 22246-22256, 2023.
[5] Yongxin Chen, Tryphon T Georgiou, and Michele Pavon. Stochastic control liaisons: Richard sinkhorn meets gaspard monge on a schrödinger bridge. Siam Review, 63(2):249-313, 2021.
[6] Zilong Chen, Feng Wang, Yikai Wang, and Huaping Liu. Text-to-3d using gaussian splatting.
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 21401-21412, 2024.
[7] Paolo Dai Pra. A stochastic control approach to reciprocal diffusion processes. Applied Mathematics and Optimization, 23(1):313-329, 1991.
[8] Valentin De Bortoli, James Thornton, Jeremy Heng, and Arnaud Doucet. Diffusion schrödinger bridge with applications to score-based generative modeling. Advances in Neural Information Processing Systems, 34:17695-17709, 2021.
[9] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780-8794, 2021.
[10] Rao Fu, Xiao Zhan, Yiwen Chen, Daniel Ritchie, and Srinath Sridhar. Shapecrafter: A recursive text-conditioned 3d shape generation model. Advances in Neural Information Processing Systems, 35:8882-8895, 2022.
[11] Amir Hertz, Kfir Aberman, and Daniel Cohen-Or. Delta denoising score. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2328-2337, 2023.
[12] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020.
[13] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
[14] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022.
[15] Yukun Huang, Jianan Wang, Yukai Shi, Boshi Tang, Xianbiao Qi, and Lei Zhang. Dreamtime: An improved optimization strategy for diffusion-guided 3d generation. arXiv preprint arXiv:2306.12422, 2023.
[16] Aapo Hyvärinen and Peter Dayan. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4), 2005.
[17] Oren Katzir, Or Patashnik, Daniel Cohen-Or, and Dani Lischinski. Noise-free score distillation.
arXiv preprint arXiv:2310.17590, 2023. +[18] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023. + +[19] Inhee Lee, Byungjun Kim, and Hanbyul Joo. Guess the unseen: Dynamic 3d scene reconstruction from partial 2d glimpses. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1062-1071, 2024. +[20] Kyungmin Lee, Kihyuk Sohn, and Jinwoo Shin. Dreamflow: High-quality text-to-3d generation by approximating probability flow. arXiv preprint arXiv:2403.14966, 2024. +[21] Christian Léonard. A survey of the schrödinger problem and some of its connections with optimal transport. arXiv preprint arXiv:1308.0215, 2013. +[22] Zongrui Li, Minghui Hu, Qian Zheng, and Xudong Jiang. Connecting consistency distillation to score distillation for text-to-3d generation. In European Conference on Computer Vision, pages 274–291. Springer, 2024. +[23] Yixun Liang, Xin Yang, Jiantao Lin, Haodong Li, Xiaogang Xu, and Yingcong Chen. Luciddreamer: Towards high-fidelity text-to-3d generation via interval score matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6517–6526, 2024. +[24] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3d: High-resolution text-to-3d content creation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 300–309, 2023. +[25] Chenguo Lin, Panwang Pan, Bangbang Yang, Zeming Li, and Yadong Mu. Diffsplat: Repurposing image diffusion models for scalable gaussian splat generation. arXiv preprint arXiv:2501.16764, 2025. +[26] Guan-Horng Liu, Arash Vahdat, De-An Huang, Evangelos A Theodorou, Weili Nie, and Anima Anandkumar. I²sb: Image-to-image schrödinger bridge. arXiv preprint arXiv:2302.05872, 2023. 
+[27] Jian Liu, Xiaoshui Huang, Tianyu Huang, Lu Chen, Yuenan Hou, Shixiang Tang, Ziwei Liu, Wanli Ouyang, Wangmeng Zuo, Junjun Jiang, et al. A comprehensive survey on 3d content generation. arXiv preprint arXiv:2402.01166, 2024. +[28] Qihao Liu, Yi Zhang, Song Bai, Adam Kortylewski, and Alan Yuille. Direct-3d: Learning direct text-to-3d generation on massive noisy 3d data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6881–6891, 2024. +[29] Artem Lukoianov, Haitz Sáez de Océriz Borde, Kristjan Greenewald, Vitor Guizilini, Timur Bagautdinov, Vincent Sitzmann, and Justin M Solomon. Score distillation via reparametrized ddim. Advances in Neural Information Processing Systems, 37:26011-26044, 2024. +[30] David McAllister, Songwei Ge, Jia-Bin Huang, David Jacobs, Alexei Efros, Aleksander Holynski, and Angjoo Kanazawa. Rethinking score distillation as a bridge between image distributions. Advances in Neural Information Processing Systems, 37:33779-33804, 2024. +[31] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. +[32] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM transactions on graphics (TOG), 41(4):1-15, 2022. +[33] Edward Nelson. Dynamical theories of Brownian motion, volume 106. Princeton university press, 2020. +[34] Michele Pavon and Anton Wakolbinger. On free energy, stochastic control, and schrödinger processes. In Modeling, Estimation and Control of Systems with Uncertainty: Proceedings of a Conference held in Sopron, Hungary, September 1990, pages 334-348. Springer, 1991. + +[35] Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. arXiv preprint arXiv:2209.14988, 2022. 
+[36] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695, 2022. +[37] Seyedmorteza Sadat, Otmar Hilliges, and Romann M Weber. Eliminating oversaturation and artifacts of high guidance scales in diffusion models. In The Thirteenth International Conference on Learning Representations, 2024. +[38] Erwin Schrödinger. Über die umkehrung der naturgesetze. Verlag der Akademie der Wissenschaften in Kommission bei Walter De Gruyter u ..., 1931. +[39] Erwin Schrödinger. Sur la théorie relativiste de l'électron et l'interprétation de la mécanique quantique. In Annales de l'institut Henri Poincaré, volume 2, pages 269-310, 1932. +[40] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, pages 2256-2265. PMLR, 2015. +[41] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. +[42] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020. +[43] Pascal Vincent. A connection between score matching and denoising autoencoders. Neural computation, 23(7):1661-1674, 2011. +[44] Peihao Wang, Zhiwen Fan, Dejia Xu, Dilin Wang, Sreyas Mohan, Forrest Iandola, Rakesh Ranjan, Yilei Li, Qiang Liu, Zhangyang Wang, et al. Steindreamer: Variance reduction for text-to-3d score distillation via stein identity. arXiv preprint arXiv:2401.00604, 2023. +[45] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. 
Advances in Neural Information Processing Systems, 36:8406–8441, 2023. +[46] Min Wei, Jingkai Zhou, Junyao Sun, and Xuesong Zhang. Adversarial score distillation: when score distillation meets gan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8131-8141, 2024. +[47] Taoran Yi, Jiemin Fang, Junjie Wang, Guanjun Wu, Lingxi Xie, Xiaopeng Zhang, Wenyu Liu, Qi Tian, and Xinggang Wang. Gaussian dreamer: Fast generation from text to 3d gaussians by bridging 2d and 3d diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6796-6807, 2024. +[48] Xin Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Song-Hai Zhang, and Xiaojuan Qi. Text-to-3d with classifier score distillation. arXiv preprint arXiv:2310.19415, 2023. +[49] Junzhe Zhu, Peiye Zhuang, and Sanmi Koyejo. Hifa: High-fidelity text-to-3d generation with advanced diffusion guidance. arXiv preprint arXiv:2305.18766, 2023. + +# NeurIPS Paper Checklist + +The checklist is designed to encourage best practices for responsible machine learning research, addressing issues of reproducibility, transparency, research ethics, and societal impact. Do not remove the checklist: The papers not including the checklist will be desk rejected. The checklist should follow the references and follow the (optional) supplemental material. The checklist does NOT count towards the page limit. + +Please read the checklist guidelines carefully for information on how to answer these questions. For each question in the checklist: + +- You should answer [Yes], [No], or [NA]. +- [NA] means either that the question is Not Applicable for that particular paper or the relevant information is Not Available. +- Please provide a short (1–2 sentence) justification right after your answer (even for NA). + +The checklist answers are an integral part of your paper submission. They are visible to the reviewers, area chairs, senior area chairs, and ethics reviewers. 
You will be asked to also include it (after eventual revisions) with the final version of your paper, and its final version will be published with the paper. + +The reviewers of your paper will be asked to use the checklist as one of the factors in their evaluation. While "[Yes]" is generally preferable to "[No]", it is perfectly acceptable to answer "[No]" provided a proper justification is given (e.g., "error bars are not reported because it would be too computationally expensive" or "we were unable to find the license for the dataset we used"). In general, answering "[No]" or "[NA]" is not grounds for rejection. While the questions are phrased in a binary way, we acknowledge that the true answer is often more nuanced, so please just use your best judgment and write a justification to elaborate. All supporting evidence can appear either in the main paper or the supplemental material, provided in appendix. If you answer [Yes] to a question, in the justification please point to the section(s) where related material for the question can be found. + +# 1. Claims + +Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? + +Answer: [Yes] + +Justification: We establish a novel theoretical connection, demonstrating that SDS can be precisely understood as a specific case of the Schrödinger Bridge framework. Experiments demonstrate that our TraCe achieves high-quality 3D generation, surpassing current state-of-the-art techniques. See the abstract and the end of Section 1. + +Guidelines: + +- The answer NA means that the abstract and introduction do not include the claims made in the paper. 
+- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. +- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. +- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. + +# 2. Limitations + +Question: Does the paper discuss the limitations of the work performed by the authors? + +Answer: [Yes] + +Justification: See Supplementary. + +Guidelines: + +- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. +- The authors are encouraged to create a separate "Limitations" section in their paper. +- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. +- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. +- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. 
+- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. +- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. +- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. + +# 3. Theory assumptions and proofs + +Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? + +Answer: [Yes] + +Justification: See Section 4. + +Guidelines: + +- The answer NA means that the paper does not include theoretical results. +- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced. +- All assumptions should be clearly stated or referenced in the statement of any theorems. +- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. +- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. +- Theorems and Lemmas that the proof relies upon should be properly referenced. + +# 4. 
Experimental result reproducibility + +Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? + +Answer: [Yes] + +Justification: We disclose the experimental settings to reproduce the main experimental results in our paper in Supplementary and the settings of all compared methods in Section 5. + +Guidelines: + +- The answer NA means that the paper does not include experiments. +- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. +- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. +- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. +- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. 
For example +(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. +(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. +(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). +(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. + +# 5. Open access to data and code + +Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? + +Answer: [Yes] + +Justification: Our code will be released to the community. + +Guidelines: + +- The answer NA means that paper does not include experiments requiring code. +- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. +- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). +- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. 
+- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. +- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. +- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). + +- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. + +# 6. Experimental setting/details + +Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? + +Answer: [Yes] + +Justification: We provide the optimization and train/test details of our proposed method in Supplementary. + +Guidelines: + +- The answer NA means that the paper does not include experiments. +- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. +- The full details can be provided either with the code, in appendix, or as supplemental material. + +# 7. Experiment statistical significance + +Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? + +Answer: [Yes] + +Justification: We report results over multiple rounds of experiments, reflecting their statistical variability. + +Guidelines: + +- The answer NA means that the paper does not include experiments. 
+- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. +- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). +- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) +- The assumptions made should be given (e.g., Normally distributed errors). +- It should be clear whether the error bar is the standard deviation or the standard error of the mean. +- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified. +- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). +- If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. + +# 8. Experiments compute resources + +Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? + +Answer: [Yes] + +Justification: See Section 5. + +Guidelines: + +- The answer NA means that the paper does not include experiments. +- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. + +- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. 
+- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). + +# 9. Code of ethics + +Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines? + +Answer: [Yes] + +Justification: See Supplementary. + +Guidelines: + +- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. +- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. +- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). + +# 10. Broader impacts + +Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? + +Answer: [Yes] + +Justification: See Supplementary. + +Guidelines: + +- The answer NA means that there is no societal impact of the work performed. +- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. +- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. +- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. 
On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. +- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. +- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). + +# 11. Safeguards + +Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? + +Answer: [NA] + +Justification: Our paper poses no such risks. + +Guidelines: + +- The answer NA means that the paper poses no such risks. + +- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. +- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. +- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. + +# 12. 
Licenses for existing assets + +Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? + +Answer: [Yes] + +Justification: The assets used in the paper are properly credited, and we respect the license and terms of use of these assets throughout our research procedures. + +Guidelines: + +- The answer NA means that the paper does not use existing assets. +- The authors should cite the original paper that produced the code package or dataset. +- The authors should state which version of the asset is used and, if possible, include a URL. +- The name of the license (e.g., CC-BY 4.0) should be included for each asset. +- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. +- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. +- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. +- If this information is not available online, the authors are encouraged to reach out to the asset's creators. + +# 13. New assets + +Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? + +Answer: [NA] + +Justification: Our paper does not release new assets. + +Guidelines: + +- The answer NA means that the paper does not release new assets. +- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. 
+- The paper should discuss whether and how consent was obtained from people whose asset is used. +- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. + +# 14. Crowdsourcing and research with human subjects + +Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? + +Answer: [Yes] + +Justification: See Supplementary. + +Guidelines: + +- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. +- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. +- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. + +# 15. Institutional review board (IRB) approvals or equivalent for research with human subjects + +Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? + +Answer: [Yes] + +Justification: See Supplementary. + +Guidelines: + +- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. +- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. 
+- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. +- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review. + +# 16. Declaration of LLM usage + +Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required. + +Answer: [NA] + +Justification: Our core method development in this research does not involve LLMs as any important, original, or non-standard components. + +Guidelines: + +- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components. +- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described. 
\ No newline at end of file diff --git "a/NeurIPS/2025/Walking the Schr\303\266dinger Bridge_ A Direct Trajectory for Text-to-3D Generation/images.zip" "b/NeurIPS/2025/Walking the Schr\303\266dinger Bridge_ A Direct Trajectory for Text-to-3D Generation/images.zip" new file mode 100644 index 0000000000000000000000000000000000000000..b7f0a31233893c66f9ee83faa727ce11778b5844 --- /dev/null +++ "b/NeurIPS/2025/Walking the Schr\303\266dinger Bridge_ A Direct Trajectory for Text-to-3D Generation/images.zip" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2044dcae97c5eeb81ec52f3ff33cfa830f0590d36a0849743971eeda9f6aee4 +size 530295 diff --git "a/NeurIPS/2025/Walking the Schr\303\266dinger Bridge_ A Direct Trajectory for Text-to-3D Generation/layout.json" "b/NeurIPS/2025/Walking the Schr\303\266dinger Bridge_ A Direct Trajectory for Text-to-3D Generation/layout.json" new file mode 100644 index 0000000000000000000000000000000000000000..851e12affccc4f5358a9d6186fa0e3348d81943a --- /dev/null +++ "b/NeurIPS/2025/Walking the Schr\303\266dinger Bridge_ A Direct Trajectory for Text-to-3D Generation/layout.json" @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:949f579393db86d38fa6f824a0e4ffac4e8dda0fa93ddcd1d26b425fc9898d45 +size 764093 diff --git a/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_content_list.json b/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..58d274cdf390701e18f74ce0b8bd939f1bc049fb --- /dev/null +++ b/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_content_list.json @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:d9987a5fcffc39ef3ae07a92a69a8e52e7a1d41d1a6ff6fc1ed70ee380cbae3f +size 164499 diff --git a/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_model.json b/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_model.json new file mode 100644 index 0000000000000000000000000000000000000000..98636ab19fd2fc4a973ec18a2ab1d99cfc925330 --- /dev/null +++ b/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6f236bdec0e2b9d15064749b8b58d1fa810ae24425be12b1ad89b346044c416d +size 216997 diff --git a/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_origin.pdf b/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9046c5c0e4da081f6549aa32ddda8fa3d8715bdd --- /dev/null +++ b/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/ac5eb2f1-66f9-4927-a844-c392b6834faf_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16630756521389aecdbebddd3ee56c82965e670940915787f8391e3836ff7dc5 +size 7298285 diff --git a/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/full.md b/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial 
and Detrimental Drifts in Non-Stationary Custom-Tuning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..210ed4938fb2a3120104b8379e4af4d677f24dc5 --- /dev/null +++ b/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/full.md @@ -0,0 +1,796 @@ +# Walking the Tightrope: Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning + +Xiaoyu Yang, Jie Lu, En Yu + +Australian Artificial Intelligence Institute (AAII), + +Faculty of Engineering and Information Technology, + +University of Technology Sydney, Australia. + +xiaoyu.yang-3@student.uts.edu.au; {jie.lu,en.yu-1}@uts.edu.au + +# Abstract + +This paper uncovers a critical yet overlooked phenomenon in multi-modal large language models (MLLMs), especially for chest diagnosis: detrimental concept drift within chain-of-thought (CoT) reasoning during non-stationary reinforcement fine-tuning (RFT), where reasoning token distributions evolve unpredictably, thereby introducing significant biases in final predictions. To address this, we pioneer a theoretical bridge between concept drift theory and RFT processes by formalizing CoT's autoregressive token streams as non-stationary distributions undergoing arbitrary temporal shifts. Leveraging this framework, we propose a novel autonomous counterfact-aware RFT that systematically decouples beneficial distribution adaptation from harmful concept drift through concept graph-empowered LLM experts generating counterfactual reasoning trajectories. Our solution, Counterfactual Preference Optimization (CPO), enables autonomous and stable RFT in non-stationary environments, particularly within the medical domain, through custom-tuning of counterfactual-aware preference alignment. Extensive experiments demonstrate the superior robustness, generalization, and coordination of our approach within RFT. 
Besides, we also contribute a large-scale dataset CXR-CounterFact (CCF), comprising 320,416 meticulously curated counterfactual reasoning trajectories derived from MIMIC-CXR. Our code and data are public at: https://github.com/XiaoyuYoung/CPO. + +# 1 Introduction + +Reinforcement Fine-Tuning (RFT) [1, 2] has emerged as a promising paradigm for domain-specific customization of multi-modal large language models (MLLMs) [3-5], demonstrating remarkable capability in facilitating efficient domain shift with minimal data requirements, particularly for medical downstream tasks. However, reinforcement-driven custom-tuning is fundamentally challenged by non-stationary environmental dynamics, especially inherent domain-specific data characteristics such as long-tailed distributions in medical diagnosis, and systemic data imperfections including noise and sparsity. This complex synergy induces latent concept drift that progressively misaligns the model's representation space with domain reality, culminating in catastrophic error propagation that particularly jeopardizes the reliability of MLLMs in safety-critical applications like radiology report generation. + +Concept drift theory [6, 7] provides a new perspective for analyzing the domain shift of RFT in non-stationary custom-tuning, focusing on unpredictable distribution changes in data streams. + +We posit that the autoregressive decoding paradigm inherent to MLLMs can be characterized as a sequential token stream generation process. Within this framework, each token generation step propagates through the model's internal reasoning pathways, which remain opaque to external observation, while manifesting inherent stochasticity in the evolving token probability distributions across successive decoding iterations.
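To make this token-stream view concrete, here is a toy sketch (the `toy_next_token_dist` model and all of its numbers are illustrative assumptions, not the paper's policy): the next-token distribution is conditioned on the growing prefix, so it shifts from decoding step to decoding step.

```python
import random

def toy_next_token_dist(prefix):
    # Stand-in for pi(t_j | v, l, t_<j): the distribution depends on the
    # growing prefix, so it drifts across decoding steps (non-stationarity).
    bias = 0.5 + 0.1 * (prefix.count("opacity") - prefix.count("clear"))
    bias = min(max(bias, 0.05), 0.95)
    return {"opacity": bias, "clear": 1.0 - bias}

def sample_chain(steps, seed=0):
    # Recursive on-policy sampling of one chain-of-thought token stream.
    rng = random.Random(seed)
    prefix = []
    for _ in range(steps):
        dist = toy_next_token_dist(prefix)
        token = rng.choices(list(dist), weights=list(dist.values()))[0]
        prefix.append(token)
    return prefix
```

Even in this two-token toy, each emitted token changes the conditional distribution faced at the next step, which is exactly the unpredictability that concept drift theory is designed to analyze.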
+ +Within the concept drift framework, our analysis reveals critical limitations in reinforcement fine-tuning approaches that depend on verifiable rewards in chain-of-thought (CoT) [2]: + +Observation 1.1. Specifically, while RFT operates through optimal reasoning pathway selection to maximize outcome certainty, we empirically observe that MLLM-generated CoT processes in specialized domains frequently demonstrate susceptibility to concept drift. This progressive deviation in intermediate reasoning ultimately induces substantial output divergence in non-stationary environments. + +Intuitively, we provide a representative case study that demonstrates this phenomenon in clinical reasoning contexts, as presented in Fig.1. When diagnosing chest DR images, the model generates a reasoning trajectory containing the statement: "Asymmetric lung opacity in the right middle lobe is concerning for pneumonia." We found that, at the token level, "lung opacity" shows a negligible probability gap relative to its ambiguous counterpart "opacity". Despite this minimal probabilistic disparity in the thinking process, our analysis of the diagnostic outcome distribution reveals a radical divergence in final predictions, as exhibited in Fig.1. In particular, the diagnoses are completely opposite for atelectasis, cardiomegaly and pneumonia. + +Therefore, summarizing the above challenges of reinforced fine-tuning in MLLMs, this raises an important question: + +# How to adapt the thinking process to concept drift under non-stationary reinforced custom-tuning? + +Inspired by causal inference [8-10], we develop Counterfactual Preference Optimization (CPO), a principled approach that systematically perturbs reasoning trajectories to discriminate between beneficial distribution adaptation and detrimental concept drift.
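A back-of-the-envelope illustration of Observation 1.1, with invented logits (none of these numbers are measured from the model): a sub-1% token-level gap between the two phrases can coexist with a large divergence in the final-diagnosis distribution conditioned on the chosen phrase.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a dict of logits.
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    z = sum(exps.values())
    return {k: e / z for k, e in exps.items()}

# Next-token probabilities at one CoT step (invented logits).
step_probs = softmax({"lung opacity": 2.01, "opacity": 2.00})

# Final-diagnosis distribution conditioned on which phrase was emitted
# (again invented, mimicking the case study in Fig. 1).
diagnosis = {
    "lung opacity": softmax({"pneumonia": 3.0, "atelectasis": 0.5}),
    "opacity": softmax({"pneumonia": 0.5, "atelectasis": 3.0}),
}

gap = abs(step_probs["lung opacity"] - step_probs["opacity"])
flip = abs(diagnosis["lung opacity"]["pneumonia"]
           - diagnosis["opacity"]["pneumonia"])
```

Here `gap` is below 0.01 while `flip` exceeds 0.8: the reasoning step looks innocuous token by token, yet it routes the chain toward an opposite conclusion.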
+ +Firstly, we construct a hierarchical concept graph that codifies domain-specific knowledge structures through triadic relation embeddings, including positive correlation, irrelevance, and opposition. Subsequently, we structurally embed the hierarchically structured concept graph into the LLM's reasoning architecture as an expert-guided module, + +automatically generating semantically-constrained counterfactual inference paths. Consequently, during reinforced custom-tuning, we formulate a dual-preference optimization objective that jointly maximizes likelihood alignment with human preferences while minimizing similarity to adversarially generated counterfactual paths, thus achieving decoupling of beneficial domain adaptation from detrimental concept drift. Finally, we contribute CXR-CounterFact (CCF), a chest-diagnosis preference dataset comprising 320,416 finely curated counterfactual reasoning trajectories derived from MIMIC-CXR [11] radiologic findings, aiming to validate our method and catalyze research advancements in counterfactual-aware reinforcement fine-tuning paradigms. + +In summary, our paper mainly makes the following contributions: + +![](images/d0c1deb12670662a51dd21d02eac47b5e635ced04156dd4fd4fc5627562fd344.jpg) +Figure 1: Concept Drift in RFT's reasoning for chest diagnosis. Despite analogous occurrence probabilities of "lung opacity" (in red) and "opacity" (in blue) tokens during the CoT, non-stationarity induces significant detrimental distributional drift in clinical conclusions, especially the opposite diagnoses of atelectasis, cardiomegaly and pneumonia. + +1. First, we establish a novel theoretical framework that formalizes autoregressive token generation in MLLMs through the lens of concept drift theory, enabling systematic identification and causal analysis of detrimental reasoning divergence during non-stationary reinforced custom-tuning. + +2.
Second, we propose Counterfactual Preference Optimization (CPO), which synergises structured domain-specific knowledge with systematic counterfactual intervention, driving the MLLMs with preference-aligned reinforcement learning. By embedding learnable concept graphs as the expert and generating adversarially-constrained reasoning trajectories, our approach achieves substantial decoupling between beneficial distribution adaptation and detrimental concept drift. +3. Third, we conduct comprehensive empirical validation across various clinical benchmarks for chest radiographs, including disease classification, diagnostic report generation and zero-shot generalization. The superior results demonstrate statistically significant improvements in robustness, generalization, and accuracy of our method under non-stationary custom-tuning. Besides, we also provide ablation experiments to validate the effectiveness of the various modules. +4. As a pioneering contribution to the community, we introduce CXR-CounterFact (CCF), a large-scale dataset comprising 320,416 meticulously curated counterfactual reasoning trajectories derived from MIMIC-CXR. + +# 2 Methodology + +![](images/49017601417e3d837e1eef869d53453c51b4aeb829ad272d008b3e9acd939fd1.jpg) + +![](images/24376e57f387caf2fdb1ae9599b6cb1c91e7fadc782e3154a3616b040959b118.jpg) +(b) Concept Graph for Counterfactual Cause + +![](images/e599196ac08ca5e55a5866e677879fccaf9f6fb149d6add27a18d00fa1267f94.jpg) +(a) Concept Drift Behind the Thinking of MLLMs +(c) Counterfactual Preference Optimization +Figure 2: The main contributions of our methods. (a) By formalizing autoregressive CoT generation as a stream of next-token prediction actions under the theoretical lens of concept drift, we reveal that even minor perturbations in reinforced fine-tuning can induce unpredictable distributional changes in final predictions.
(b) To disentangle detrimental drift, we introduce the concept graph that generates radiologically plausible counterfactual CoTs through controlled attribute perturbations. Green lines represent attributes that are positively correlated with the disease, while red lines denote mutually exclusive ones. (c) We propose counterfactual preference optimization to drive the reinforced custom-tuning of MLLMs, enabling generalized CoT reasoning in non-stationary environments through disentanglement of beneficial domain adaptation from spurious concept drift, thereby achieving robust human-aligned decision-making via preference distillation. + +# 2.1 Underlying Concept Drift Behind Thinking + +In this section, we first extend the concept drift theory to reinforced custom-tuning, highlighting the phenomenon wherein the distributional characteristics of targets undergo arbitrary changes over the course of the thinking process. Operating through recursive on-policy sampling, the MLLM $\pi$ + +autoregressively generates the token at position $j$ in the reasoning chain, conditioned on the visual input image $v$, the textual prompt $l$, and the partial token sequence $t_{<j}$. When it produces the end-of-sequence token, the resulting state $s_{L+1}$ is the terminal state, thereby concluding one + +# Ground Truth + +# Findings: + +Moderate cardiomegaly is increased. No focal consolidation or pneumothorax. There is a slight blunting of the costophrenic angles, which may indicate small pleural effusion or scarring. There is increased density at the perihilar regions which may indicate pulmonary vascular congestion. + +# Diagnosis: + +The disease of this patient is Cardiomegaly. + +Generated Counterfactual Reasoning of Pneumonia. + +# Findings: + +PA and lateral views of the chest show moderate cardiomegaly. Focal consolidation is noted in the right lower lobe with accompanying bronchial airspace opacification. No pneumothorax is observed.
Slight blunting of the costophrenic angles suggests the presence of a small pleural effusion. Increased density at the perihilar regions indicates pulmonary vascular congestion, but also suggests possible pneumonia. + +# Diagnosis: + +The disease of this patient is Pneumonia. + +Table 1: Example to illustrate the generated counterfactual diagnosis. Ground Truth denotes the original diagnosis report from MIMIC-CXR, and the bottom is the counterfactual report designed for pneumonia. Underline indicates generated counterfactual diagnostic attributes through controlled perturbation while maintaining radiological plausibility, leading to the counterfactual diagnosis. + +chain-of-thought generation process. We regard the reports issued by professional doctors as the chain-of-thought preferred by humans, which serve as positive samples. As for negative samples, we generate counterfactual CoTs for diagnostic report instances, that is, perturbing specific radiological features to interfere with the diagnosis results according to our proposed concept graph. Thereby, the chain-of-thought preferred by humans is considered to be $t^{+}$, namely the diagnosis report stemming from the radiologist, while the generated counterfactual CoT is represented by $t^{-}$. + +Consequently, following DPO [15], we derive the optimal policy that maximizes the reward function through: + +$$ +\pi_{\theta}(t \mid v, l) \propto \pi_{\mathrm{ref}}(t \mid v, l) \exp\left(\frac{r(v, l, t)}{\beta}\right) \tag{6} +$$ + +where $\beta$ is a parameter controlling the deviation from the base reference policy $\pi_{\mathrm{ref}}$, namely the initial supervised fine-tuned (SFT) model, and $\pi_{\theta}$ denotes the fine-tuning model.
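Inverting Eq. 6 gives the implied reward $r(v,l,t)=\beta\log\frac{\pi_{\theta}(t\mid v,l)}{\pi_{\mathrm{ref}}(t\mid v,l)}$ up to a constant, which under the Bradley-Terry model yields the pairwise objective derived next. A minimal per-pair sketch, assuming the sequence log-probabilities under both policies are already computed:

```python
import math

def cpo_pair_loss(logp_pos_theta, logp_pos_ref,
                  logp_neg_theta, logp_neg_ref, beta=0.1):
    # Reward margin between the human-preferred CoT t+ and the
    # counterfactual CoT t-, expressed via policy/reference log-ratios.
    margin = beta * ((logp_pos_theta - logp_pos_ref)
                     - (logp_neg_theta - logp_neg_ref))
    # Negative log-sigmoid of the margin (Bradley-Terry likelihood).
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

In training, the expectation over $(v, l, t^{+}, t^{-})$ is approximated by averaging this quantity over mini-batches of report/counterfactual pairs; raising the likelihood of $t^{+}$ relative to $t^{-}$ shrinks the loss.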
With counterfactual effect in Eq.4, the reward difference between human-preferred positive samples and counterfactual samples can be defined as: + +$$ +r (v, l, t ^ {+}) - r (v, l, t ^ {-}) = \beta \left[ \log \frac {\pi_ {\theta} \left(t ^ {+} | v , l\right)}{\pi_ {\text {r e f}} \left(t ^ {+} | v , l\right)} - \log \frac {\pi_ {\theta} \left(t ^ {-} | v , l\right)}{\pi_ {\text {r e f}} \left(t ^ {-} | v , l\right)} \right] \tag {7} +$$ + +Thus, based on the Bradley-Terry model, the counterfactual preference optimization (CPO) is driving the reinforced custom-tuning of the MLLMs through the maximum likelihood objective: + +$$ +\mathcal {L} _ {\mathrm {C P O}} \left(\pi_ {\theta}; \pi_ {\mathrm {r e f}}\right) = - \mathbb {E} _ {(v, l, t ^ {+}, t ^ {-})} \left[ \log \sigma \left(\beta \log \frac {\pi_ {\theta} (t ^ {+} | v , l)}{\pi_ {\mathrm {r e f}} (t ^ {+} | v , l)} - \beta \log \frac {\pi_ {\theta} (t ^ {-} | v , l)}{\pi_ {\mathrm {r e f}} (t ^ {-} | v , l)}\right) \right] \tag {8} +$$ + +In this context, it culminates in counterfactual reinforced custom-tuning, an adaptive framework that effectively differentiates between advantageous domain adaptation and harmful concept drift in non-stationary environments, achieving equilibrium preservation through causal intervention and dynamic policy reinforcement, walking the tightrope. + +# 2.5 Building CXR-CounterFact Dataset for Clinical Reasoning Chains + +Since we are pioneers in introducing counterfactual cause into reinforced custom-tuning of MLLMs, we are deeply aware of the scarcity of counterfactual CoT in downstream tasks, especially in the highly professional medical field. Thus, our aspiration is for the model to adeptly acclimate to the concept drift by itself, acquiring abundant knowledge with more and more data, but not exhibiting bias. 
+ +In this context, a more realistic training dataset for multi-modal large language models is required to validate their potential to be trained under non-stationary reinforced custom-tuning. Recognizing the demand for higher-quality multi-modal data with CoT, we develop a dataset called the CXR-CounterFact Dataset (CCF), extending MIMIC-CXR [11] with counterfactual chains-of-thought. This novel dataset introduces 320,416 meticulously curated counterfactual pairs spanning 14 thoracic pathologies, establishing a pioneering large-scale benchmark for causal interpretation in clinical chest X-ray analysis. More details are given in Appendix B. + +# 3 Experiments + +In this section, we verify the robustness, generalization and coordination of our proposed counterfactual preference optimization for reinforced custom-tuning in non-stationary environments. + +MIMIC-CXR [11] is utilized to train the MLLMs via reinforced custom-tuning for domain adaptation; it contains 371,920 chest X-rays associated with 227,943 imaging studies from 65,079 patients. Images are provided with 14 labels and corresponding free-text radiology reports, namely Atelectasis (Ate.), Cardiomegaly (Car.), Consolidation (Con.), Edema (Ede.), Enlarged Cardiomediastinum (ECM), Fracture (Fra.), Lung Lesion (LL), Lung Opacity (LO), Pleural Effusion (PE), Pneumonia (Pna.), Pneumothorax (Pnx.), Pleural Other (PO), Support Devices (SD) and No Finding (NF). + +We selected the MIMIC-CXR [11] dataset not only for its well-established benchmark enabling rigorous performance evaluation on real-world downstream medical tasks, but also due to its authentic clinical representation that exhibits inherent non-stationarity, particularly long-tail and + +![](images/ed41aa7e6a97be4ad6717a12a9a0f01ba2e77338a6779c3923a6332c3d44b9f6.jpg) +Figure 4: Non-stationarity of MIMIC-CXR with its percentage of diseases.
Blue signifies patients with clinically confirmed diagnoses, showing the long-tailed characteristic, while red demarcates suspected cases, emphasizing the inherent uncertainty within medicine. + +diagnostic ambiguity. As illustrated in Fig.4, the statistical profiling of 14 thoracic pathologies in MIMIC-CXR reveals dual clinical characteristics: confirmed diagnoses follow a clear long-tailed distribution, and each disease carries a large number of uncertain cases. Beyond that, we found that $40.87\%$ of the patients suffered from two or more diseases, with nearly $19.97\%$ experiencing three or more. These characteristics reflect the non-stationary environment of our experimental setup with MIMIC-CXR as the training dataset for reinforced custom-tuning. + +In terms of the model, we employ Qwen2.5-VL (7B) [16] to perform supervised fine-tuning (SFT) and reinforced fine-tuning (RFT) in cascade. Each stage is trained for only one epoch with a batch size of 2. + +More detailed experimental implementations are given in Appendix C. + +# 3.1 Taming the Non-stationary Custom-Tuning + +First, to explicitly demonstrate the superior performance of our proposed method in non-stationary environments, especially in robustness, we compare it with other models on MS-CXR-T [17], whose instances are chosen from the public MIMIC-CXR. As exhibited in Table 2, our counterfactual preference optimization approach achieves superior overall performance of $81.8\%$, surpassing the second-best method, CoCa-CXR [18], by nearly $12.6\%$. This demonstrates the robustness of our approach to reinforced fine-tuning in non-stationary environments. While our approach trails TempA-VLP [19] by $1.8\%$ on edema (Ede.) detection, we argue that this gap stems from TempA-VLP's use of additional annotations from Chest ImaGenome [20] beyond standard MIMIC-CXR.
In terms of the pneumothorax (Pnx.), SFT has already achieved a high result of $95.9\%$, so the slight decrease with CPO does not affect the overall performance of the model. + +Notably, supervised fine-tuning (SFT) exhibits suboptimal performance on the clinically correlated diseases, namely consolidation (Con.) and pneumonia (Pna.), as presented in Table 1 of Section 2.3. This empirically validates our Observation 1.1 that the inherent concept drift in disease attribute representation within chain-of-thought reasoning introduces systematic prediction biases. + +Beyond that, the substantial performance gains of $22.8\%$ and $17.4\%$ in CPO for consolidation (Con.) and pneumonia (Pna.), respectively, underscore our core contribution, which disentangles the concept drift of CoT in reinforcement learning under non-stationary custom-tuning, achieving robust reasoning. + +In terms of DPO [15], while our proposed CPO and DPO share a preference-style optimization framework, CPO is fundamentally distinct. DPO contrasts human-preferred with dispreferred responses, while CPO contrasts factual reasoning with counterfactuals generated under explicit causal interventions, specifically designed to isolate the causal effect. Crucially, as shown in Table 2, our proposed CPO is significantly superior to DPO with the same amount of training data, proving they are not functionally equivalent. Besides, we also tried to + +use GRPO [25] to train Qwen2.5-VL and DeepSeek-VL2 on MIMIC-CXR, but both encountered reward collapse that led to failed training. Inspired by the DAPO paper [26], we attribute this to GRPO's reliance on sparse final-answer rewards, where suboptimal reward assignment obscures high-quality samples. In contrast, CPO provides denser causal trajectories, significantly improving generalization and robustness via controlled counterfactual interventions.
| Method | Venue | Con. | PE | Pna. | Pnx. | Ede. | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CTrans [21] | CVPR’23 | 44.0 | 61.3 | 45.1 | 31.5 | 65.5 | 49.5 |
| CheXRelNet [22] | MICCAI’22 | 47.0 | 47.0 | 47.0 | 36.0 | 49.0 | 45.2 |
| BioViL [23] | ECCV’22 | 56.0 | 63.0 | 60.2 | 42.5 | 67.5 | 57.8 |
| BioViL-T [21] | CVPR’23 | 61.1 | 67.0 | 61.9 | 42.6 | 68.5 | 60.2 |
| Med-ST [24] | ICML’24 | 60.6 | 67.4 | 58.5 | 65.0 | 54.2 | 61.1 |
| TempA-VLP [19] | WACV’25 | 65.2 | 59.4 | 73.4 | 43.1 | 77.1 | 63.6 |
| CoCa-CXR [18] | Arxiv’25 | 70.4 | 69.6 | 61.4 | 72.8 | 71.8 | 69.2 |
| SFT | This paper | 54.9 | 71.7 | 70.0 | 95.9 | 76.5 | 73.8 |
| DPO | This paper | 63.2 | 72.4 | 76.7 | 93.5 | 76.3 | 76.4 |
| CPO | This paper | 77.7 | 72.7 | 87.4 | 95.8 | 75.3 | 81.8 |
+ +# 3.2 Concept Drift-Aware CoT for Accurate Reasoning + +Table 2: Evaluation results of multi-label chest disease classification on MS-CXR-T. Top-1 accuracy is applied to evaluate the performance of different methods. The best-performing models are highlighted in red, with the second-best in blue. SFT denotes the results of supervised fine-tuning, DPO indicates direct preference optimization with random negative samples, and CPO represents our counterfactual preference optimization method. +
| Method | Venue | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | ROUGE-L | METEOR | CIDEr |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R2Gen [27] | EMNLP’20 | 0.353 | 0.218 | 0.145 | 0.103 | 0.277 | 0.142 | - |
| PPKED [28] | CVPR’21 | 0.360 | 0.224 | 0.149 | 0.106 | 0.284 | 0.149 | 0.237 |
| AlignTrans [29] | MICCAI’21 | 0.378 | 0.235 | 0.156 | 0.112 | 0.283 | 0.158 | - |
| CMCL [30] | ACL’21 | 0.344 | 0.217 | 0.140 | 0.097 | 0.281 | 0.133 | - |
| Clinical-BERT [31] | AAAI’22 | 0.383 | 0.230 | 0.151 | 0.106 | 0.275 | 0.144 | 0.151 |
| METransformer [32] | CVPR’23 | 0.386 | 0.250 | 0.169 | 0.124 | 0.291 | 0.152 | 0.362 |
| DCL [33] | CVPR’23 | - | - | - | 0.109 | 0.284 | 0.150 | 0.281 |
| R2GenGPT [34] | MetaRad’23 | 0.408 | 0.256 | 0.174 | 0.125 | 0.285 | 0.167 | 0.244 |
| PromptMRG [35] | AAAI’24 | 0.398 | - | - | 0.112 | 0.268 | 0.157 | - |
| BtspLLM [36] | AAAI’24 | 0.402 | 0.262 | 0.180 | 0.128 | 0.291 | 0.175 | - |
| MambaXray [37] | Arxiv’24 | 0.422 | 0.268 | 0.184 | 0.133 | 0.289 | 0.167 | 0.241 |
| CPO | This paper | 0.426 | 0.288 | 0.186 | 0.155 | 0.321 | 0.236 | 0.375 |
+ +Table 3: Evaluation results of diagnostic report generation on MIMIC-CXR with various metrics including BLEU-1/-2/-3/-4, ROUGE-L, METEOR and CIDEr. The best-performing models are highlighted in red, with the second-best in blue. + +Beyond classification, we verify our main contribution of accurate reasoning, preserving the beneficial CoT within domain adaptation while eliminating harmful concept drift. As exhibited in Table 3, experiments on diagnostic report generation on MIMIC-CXR are conducted to assess the quality of our model's chain-of-thought reasoning. Our evaluation + +combines multiple metrics: BLEU evaluates terminology accuracy, with higher-order scores indicating logical coherence in clinical reasoning; ROUGE-L assesses completeness through narrative alignment; METEOR enables synonym-aware lexical matching; and CIDEr prioritizes clinical details via corpus-informed weighting. + +The experimental findings demonstrate that our reasoning framework achieves prominent performance across all evaluation metrics, with particularly notable improvements in BLEU-4 (16.5% improvement), ROUGE-L (10.3% increase) and METEOR (34.8% enhancement) scores, indicating the coherence, completeness, and professionalism of our model's thinking. We attribute this to the enhanced fidelity of our reasoning chains in combating concept drift during non-stationary reinforcement learning processes. These enhancements reveal that our method's superior accuracy stems from its capacity to maintain coherent reasoning pathways even when faced with the dynamically shifting environmental parameters of complex RL scenarios. + +# 3.3 Generalized Reinforced Custom-tuning +
| Method | Open-I | PadChest | PadChest20 | ChestXray14 | CheXpert | ChestXDet10 |
| --- | --- | --- | --- | --- | --- | --- |
| MedCLIP [38] | 55.1 | 50.8 | 50.1 | 56.4 | 74.4 | 57.1 |
| BiomedCLIP [39] | 57.7 | 51.3 | 51.0 | 63.9 | 67.7 | 63.0 |
| GLoRIA [40] | 58.9 | 56.5 | 55.8 | 61.0 | 75.0 | 64.5 |
| BioViL [21] | 70.2 | 65.5 | 60.8 | 72.9 | 78.9 | 70.8 |
| CheXzero [41] | 75.9 | 62.9 | 68.8 | 72.6 | 87.9 | 71.3 |
| MedKLIP [42] | 75.9 | 62.9 | 68.8 | 72.6 | 87.9 | 71.3 |
| KAD [43] | 80.7 | 75.0 | 73.5 | 78.9 | 90.5 | 73.5 |
| CARZero [44] | 83.8 | 81.0 | 83.7 | 81.1 | 92.3 | 79.6 |
| CPO | 84.4 | 82.0 | 85.1 | 81.7 | 92.5 | 80.1 |
+ +Furthermore, we validated the generalization of our model on downstream tasks with zero-shot multi-label classification across six different benchmarks, as presented in Table 4. Experimental results demonstrate that our CPO-driven MLLMs achieve zero-shot superiority over the second-best baseline CARZero [44] across all benchmark datasets, underscoring our remarkable robustness and generalization capabilities even when trained under non-stationary environmental regimes. + +# 3.4 Ablation Study on Inherent Compatibility of CPO and CoT: Two Peas in a Pod + +Moreover, we conduct ablation experiments on MIMIC-CXR to validate the feasibility and coordination of chain-of-thought (CoT) and counterfactual preference optimization (CPO) within reinforced fine-tuning (RFT) in non-stationary environments, as presented in Table 5. Among them, CoT-only RFT without CPO represents the utilization of direct preference optimization (DPO) [15] to drive MLLMs for reinforcement learning. Meanwhile, CPO-only RFT denotes reinforced fine-tuning that uses only the diagnosis results, without the thinking process, during training. + +Table 4: Evaluation results of zero-shot disease classification on Open-I [45], PadChest [46], PadChest20 [46], ChestXray14 [47], CheXpert [48] and ChestXDet10 [49]. AUC is applied to evaluate the performance of different methods. The best-performing models are highlighted in red, with the second-best in blue.
| SFT | RFT: CoT | RFT: CPO | Con. | PE | Pna. | Pnx. | Ede. | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✓ | - | - | 54.9 | 71.7 | 70.0 | 95.9 | 76.5 | 73.8 |
| ✓ | ✓ | - | 58.4 | 71.2 | 75.0 | 94.4 | 75.5 | 74.9 |
| ✓ | - | ✓ | 70.5 | 72.7 | 77.3 | 95.2 | 75.8 | 78.3 |
| ✓ | ✓ | ✓ | 77.7 | 72.7 | 87.4 | 95.8 | 75.3 | 81.8 |
+ +Table 5: Ablation evaluation results on chain-of-thought (CoT) and counterfactual preference optimization (CPO) within reinforced fine-tuning (RFT) on MIMIC-CXR, where all RFT stages follow the supervised fine-tuning (SFT). A $\checkmark$ denotes that the corresponding module is used during training. The results are based on the test split of MS-CXR-T, with Top-1 accuracy (Acc) as the metric. + +The experimental analysis reveals CPO's superior performance gain in reinforcement learning (4.5% vs. CoT's 1.1%). We argue + +that this is mainly attributable to CPO's mechanism of introducing causally attributed negative samples that enable decision boundary refinement in feature space, whereas CoT primarily operates through stepwise cognitive scaffolding via enhanced positive samples. Therefore, the inherent synergistic compatibility between CoT and CPO emerges through their complementary roles in reinforcement learning frameworks, with CoT generating reinforcement-aligned positive exemplars and CPO providing causality-attuned negative specimens. Jointly, they orchestrate MLLM training optimization, as empirically validated by comprehensive benchmarking results achieving state-of-the-art performance. + +# 4 Conclusion and Outlooks + +In this paper, we present counterfactual preference optimization (CPO), a novel, robust and generalized reinforced custom-tuning paradigm tailored for non-stationary environments. We employ concept drift theory to methodically formalize the bias within the autoregressive token generation of MLLMs and put forward causal counterfactual thinking to mitigate these detrimental drifts while preserving beneficial domain adaptation. By virtue of this framework, CPO is devised to counteract the unpredictable distribution changes occurring within non-stationary environments. + +We hope that our work will inspire future advancements in counterfactual causal reasoning for reinforcement learning paradigms, specifically addressing biases originating from real-world data challenges.
In future research, we will focus on improving the efficiency of counterfactual causal interventions in reinforced fine-tuning, and on broader applications. + +# Acknowledgment + +The work was supported by the Australian Research Council (ARC) under Laureate project FL190100149. + +# References + +[1] Trung, L., X. Zhang, Z. Jie, et al. ReFT: Reasoning with Reinforced Fine-Tuning. In L.-W. Ku, A. Martins, V. Srikumar, eds., Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7601-7614. Association for Computational Linguistics, Bangkok, Thailand, 2024. +[2] Liu, Z., Z. Sun, Y. Zang, et al. Visual-RFT: Visual Reinforcement Fine-Tuning, 2025. +[3] Alayrac, J.-B., J. Donahue, P. Luc, et al. Flamingo: A Visual Language Model for Few-Shot Learning. Advances in Neural Information Processing Systems, 35:23716-23736, 2022. +[4] Bai, J., S. Bai, S. Yang, et al. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond, 2023. +[5] Dai, W., J. Li, D. Li, et al. InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning, 2023. +[6] Lu, J., A. Liu, F. Dong, et al. Learning under Concept Drift: A Review. IEEE Transactions on Knowledge and Data Engineering, 31(12):2346-2363, 2019. +[7] Yang, X., J. Lu, E. Yu. Adapting multi-modal large language model to concept drift from pre-training onwards. arXiv preprint arXiv:2405.13459, 2025. +[8] Pearl, J. Causal diagrams for empirical research. Biometrika, 82(4):669-688, 1995. +[9] —. Causal inference in statistics: a primer. John Wiley & Sons, 2016. +[10] Yang, X., J. Lu, E. Yu. Causal-informed contrastive learning: Towards bias-resilient pre-training under concept drift. arXiv preprint arXiv:2502.07620, 2025. +[11] Johnson, A. E., T. J. Pollard, S. J. Berkowitz, et al. Mimic-cxr, a de-identified publicly available database of chest radiographs with free-text reports. _Scientific Data_, 6(1):317, 2019. + +[12] Pearl, J. Direct and indirect effects.
In Probabilistic and causal inference: the works of Judea Pearl, pages 373-392. 2022. +[13] Zhang, X., C. Wu, Y. Zhang, et al. Knowledge-enhanced visual-language pre-training on chest radiology images. 14(1):4542, 2023. +[14] Zhou, X., X. Zhang, C. Wu, et al. Knowledge-enhanced Visual-Language Pretraining for Computational Pathology, 2024. +[15] Rafailov, R., A. Sharma, E. Mitchell, et al. Direct Preference Optimization: Your Language Model is Secretly a Reward Model. 36:53728-53741, 2023. +[16] Team, Q. Qwen2.5-vl, 2025. +[17] Bannur, S., S. Hyland, F. Liu, et al. Learning to exploit temporal structure for biomedical vision-language processing. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2023. +[18] Chen, Y., S. Xu, A. Sellergren, et al. Coca-cxr: Contrastive captioners learn strong temporal structures for chest x-ray vision-language understanding, 2025. +[19] Yang, Z., L. Shen. Tempa-vlp: Temporal-aware vision-language pretraining for longitudinal exploration in chest x-ray image. In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 4625-4634. 2025. +[20] Wu, J. T., N. N. Agu, I. Lourentzou, et al. Chest imagenome dataset for clinical reasoning. arXiv preprint arXiv:2108.00316, 2021. +[21] Bannur, S., S. Hyland, Q. Liu, et al. Learning to exploit temporal structure for biomedical vision-language processing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15016-15027. 2023. +[22] Karwande, G., A. B. Mbakwe, J. T. Wu, et al. Chexrelnet: An anatomy-aware model for tracking longitudinal relationships between chest x-rays. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 581-591. Springer, 2022. +[23] Boecking, B., N. Usuyama, S. Bannur, et al. Making the most of text semantics to improve biomedical vision-language processing. In European conference on computer vision, pages 1-21. Springer, 2022. 
+[24] Yang, J., B. Su, X. Zhao, et al. Unlocking the power of spatial and temporal information in medical multimodal pre-training. In _Forty-first International Conference on Machine Learning_ 2024. +[25] Shao, Z., P. Wang, Q. Zhu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. +[26] Yu, Q., Z. Zhang, R. Zhu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025. +[27] Chen, Z., Y. Song, T.-H. Chang, et al. Generating radiology reports via memory-driven transformer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1439-1449. 2020. +[28] Liu, F., X. Wu, S. Ge, et al. Exploring and distilling posterior and prior knowledge for radiology report generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 13753-13762. 2021. +[29] You, D., F. Liu, S. Ge, et al. Aligntransformer: Hierarchical alignment of visual regions and disease tags for medical report generation. In Medical Image Computing and Computer Assisted Intervention-MICCAI 2021: 24th International Conference, Strasbourg, France, September 27-October 1, 2021, Proceedings, Part III 24, pages 72-82. Springer, 2021. + +[30] Liu, F., S. Ge, X. Wu. Competence-based multimodal curriculum learning for medical report generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3001-3012. 2021. +[31] Yan, B., M. Pei. Clinical-bert: Vision-language pre-training for radiograph diagnosis and reports generation. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, pages 2982-2990. 2022. +[32] Wang, Z., L. Liu, L. Wang, et al. Metransformer: Radiology report generation by transformer with multiple learnable expert tokens. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11558-11567. 2023.
[33] Li, M., B. Lin, Z. Chen, et al. Dynamic graph enhanced contrastive learning for chest x-ray report generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3334-3343. 2023.
[34] Wang, Z., L. Liu, L. Wang, et al. R2GenGPT: Radiology report generation with frozen LLMs. Meta-Radiology, 1(3):100033, 2023.
[35] Jin, H., H. Che, Y. Lin, et al. PromptMRG: Diagnosis-driven prompts for medical report generation. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, pages 2607-2615. 2024.
[36] Liu, C., Y. Tian, W. Chen, et al. Bootstrapping large language models for radiology report generation. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, pages 18635-18643. 2024.
[37] Wang, X., F. Wang, Y. Li, et al. CXPMRG-Bench: Pre-training and benchmarking for X-ray medical report generation on CheXpert Plus dataset, 2024.
[38] Wang, Z., Z. Wu, D. Agarwal, et al. MedCLIP: Contrastive learning from unpaired medical images and text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, vol. 2022, page 3876. 2022.
[39] Zhang, S., Y. Xu, N. Usuyama, et al. BiomedCLIP: A multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs. arXiv preprint arXiv:2303.00915, 2023.
[40] Huang, S.-C., L. Shen, M. P. Lungren, et al. GLoRIA: A multimodal global-local representation learning framework for label-efficient medical image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3942-3951. 2021.
[41] Tiu, E., E. Talius, P. Patel, et al. Expert-level detection of pathologies from unannotated chest x-ray images via self-supervised learning. Nature Biomedical Engineering, 6(12):1399-1406, 2022.
[42] Wu, C., X. Zhang, Y. Zhang, et al. MedKLIP: Medical knowledge enhanced language-image pre-training for x-ray diagnosis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 21372-21383. 2023.
[43] Zhang, X., C. Wu, Y. Zhang, et al. Knowledge-enhanced visual-language pre-training on chest radiology images. Nature Communications, 14(1):4542, 2023.
[44] Lai, H., Q. Yao, Z. Jiang, et al. CARZero: Cross-attention alignment for radiology zero-shot classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11137-11146. 2024.
[45] Demner-Fushman, D., S. Antani, M. Simpson, et al. Design and development of a multimodal biomedical information retrieval system. Journal of Computing Science and Engineering, 6(2):168-177, 2012.
[46] Bustos, A., A. Pertusa, J.-M. Salinas, et al. PadChest: A large chest x-ray image dataset with multi-label annotated reports. Medical Image Analysis, 66:101797, 2020.
[47] Wang, X., Y. Peng, L. Lu, et al. ChestX-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017.
[48] Irvin, J., P. Rajpurkar, M. Ko, et al. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pages 590-597. 2019.
[49] Liu, J., J. Lian, Y. Yu. ChestX-Det10: Chest x-ray dataset on detection of thoracic abnormalities, 2020.
[50] Lu, J., A. Liu, Y. Song, et al. Data-driven decision support under concept drift in streamed big data. Complex & Intelligent Systems, 6(1):157-163, 2020.
[51] Wang, K., L. Xiong, A. Liu, et al. A self-adaptive ensemble for user interest drift learning. 577:127308, 2024.
[52] Jiao, B., Y. Guo, D. Gong, et al. Dynamic ensemble selection for imbalanced data streams with concept drift. 35(1):1278-1291, 2024.
[53] Cerqueira, V., H. M. Gomes, A. Bifet, et al. STUDD: A student-teacher method for unsupervised concept drift detection. 112(11):4351-4378, 2023.
[54] Yang, X., Y. Chen, X. Yue, et al. T-distributed spherical feature representation for imbalanced classification. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9):10825-10833, 2023.
[55] Yu, E., J. Lu, B. Zhang, et al. Online boosting adaptive learning under concept drift for multistream classification. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, pages 16522-16530. 2024.
[56] Yu, E., Y. Song, G. Zhang, et al. Learn-to-adapt: Concept drift adaptation for hybrid multiple streams. 496:121-130, 2022.
[57] Yu, H., W. Liu, J. Lu, et al. Detecting group concept drift from multiple data streams. 134:109113, 2023.
[58] Liu, W., X. Yue, Y. Chen, et al. Trusted multi-view deep learning with opinion aggregation. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, pages 7585-7593. 2022.
[59] Liu, W., Y. Chen, X. Yue. Enhancing testing-time robustness for trusted multi-view classification in the wild. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 15508-15517. 2025.
[60] Liu, W., Y. Chen, X. Yue. Building trust in decision with conformalized multi-view deep classification. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 7278-7287. 2024.
[61] Li, W., X. Yang, W. Liu, et al. DDG-DA: Data distribution generation for predictable concept drift adaptation. 36(4):4092-4100, 2022.
[62] Yang, X., H. Zhang, J. Cai. Deconfounded image captioning: A causal retrospect. 45(11):12996-13010, 2023.
[63] Liu, B., D. Wang, X. Yang, et al. Show, deconfound and tell: Image captioning with causal inference. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18041-18050. 2022.
[64] Zhang, C., L. Zhang, D. Zhou. Causal Walk: Debiasing multi-hop fact verification with front-door adjustment, 2024.
[65] Choi, S., M. Jeong, H. Han, et al. C2L: Causally contrastive learning for robust text classification. 36(10):10526-10534, 2022.
[66] Rohekar, R. Y., Y. Gurwicz, S. Nisimov. Causal interpretation of self-attention in pre-trained transformers. 36:31450-31465, 2023.
[67] Yang, X., H. Zhang, G. Qi, et al. Causal attention for vision-language tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9847-9857. 2021.
[68] Yang, X., L. Xu, H. Li, et al. One leaf reveals the season: Occlusion-based contrastive learning with semantic-aware views for efficient visual representation. In Forty-second International Conference on Machine Learning. 2025.
[69] Christiano, P. F., J. Leike, T. Brown, et al. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30, 2017.
[70] Ouyang, L., J. Wu, X. Jiang, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744, 2022.
[71] Jaech, A., A. Kalai, A. Lerer, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.
[72] Bai, Y., S. Kadavath, S. Kundu, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022.
[73] Guo, D., D. Yang, H. Zhang, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[74] Gulcehre, C., T. L. Paine, S. Srinivasan, et al. Reinforced self-training (ReST) for language modeling, 2023.
[75] Yuan, Z., H. Yuan, C. Li, et al. Scaling relationship on learning mathematical reasoning with large language models, 2024.
[76] Zeng, W., Y. Huang, L. Zhao, et al. B-STaR: Monitoring and balancing exploration and exploitation in self-taught reasoners. In The Thirteenth International Conference on Learning Representations. 2025.
[77] Zhang, Z., C. Zheng, Y. Wu, et al. The lessons of developing process reward models in mathematical reasoning. arXiv preprint arXiv:2501.07301, 2025.
[78] Yang, X., L. Xu, H. Sun, et al. Enhancing visual grounding and generalization: A multi-task cycle training approach for vision-language models. arXiv preprint arXiv:2311.12327, 2024.
[79] Yang, X., J. Lu, E. Yu. Learning from all: Concept alignment for autonomous distillation from multiple drifting MLLMs. arXiv preprint arXiv:2510.04142, 2025.
[80] Li, Z.-Z., D. Zhang, M.-L. Zhang, et al. From system 1 to system 2: A survey of reasoning large language models. arXiv preprint arXiv:2502.17419, 2025.
[81] Chen, Q., L. Qin, J. Liu, et al. Towards reasoning era: A survey of long chain-of-thought for reasoning large language models. arXiv preprint arXiv:2503.09567, 2025.
[82] Zhang, Z., A. Zhang, M. Li, et al. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923, 2023.
[83] Chen, Q., L. Qin, J. Zhang, et al. $M^3$CoT: A novel benchmark for multi-domain multi-step multi-modal chain-of-thought. arXiv preprint arXiv:2405.16473, 2024.
[84] Wang, Y., S. Wu, Y. Zhang, et al. Multimodal chain-of-thought reasoning: A comprehensive survey. arXiv preprint arXiv:2503.12605, 2025.
[85] Zheng, G., B. Yang, J. Tang, et al. DDCoT: Duty-distinct chain-of-thought prompting for multimodal reasoning in language models. Advances in Neural Information Processing Systems, 36:5168-5191, 2023.
[86] Liu, W., Y. Chen, X. Yue, et al. Enhancing reliability in medical image classification of imperfect views. IEEE Transactions on Circuits and Systems for Video Technology, 2025.

# NeurIPS Paper Checklist

The checklist is designed to encourage best practices for responsible machine learning research, addressing issues of reproducibility, transparency, research ethics, and societal impact. Do not remove the checklist: papers that do not include the checklist will be desk rejected.
The checklist should follow the references and the (optional) supplemental material. The checklist does NOT count towards the page limit.

Please read the checklist guidelines carefully for information on how to answer these questions. For each question in the checklist:

- You should answer [Yes], [No], or [NA].
- [NA] means either that the question is Not Applicable for that particular paper or the relevant information is Not Available.
- Please provide a short (1-2 sentence) justification right after your answer (even for NA).

The checklist answers are an integral part of your paper submission. They are visible to the reviewers, area chairs, senior area chairs, and ethics reviewers. You will be asked to also include it (after eventual revisions) with the final version of your paper, and its final version will be published with the paper.

The reviewers of your paper will be asked to use the checklist as one of the factors in their evaluation. While "[Yes]" is generally preferable to "[No]", it is perfectly acceptable to answer "[No]" provided a proper justification is given (e.g., "error bars are not reported because it would be too computationally expensive" or "we were unable to find the license for the dataset we used"). In general, answering "[No]" or "[NA]" is not grounds for rejection. While the questions are phrased in a binary way, we acknowledge that the true answer is often more nuanced, so please just use your best judgment and write a justification to elaborate. All supporting evidence can appear either in the main paper or the supplemental material, provided in the appendix. If you answer [Yes] to a question, in the justification please point to the section(s) where related material for the question can be found.

IMPORTANT, please:

- Delete this instruction block, but keep the section heading "NeurIPS Paper Checklist",
- Keep the checklist subsection headings, questions/answers and guidelines below.
- Do not modify the questions and only use the provided macros for your answers.

# 1. Claims

Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?

Answer: [Yes]

Justification: The main claims made in the abstract and introduction accurately reflect the paper's contributions and scope.

Guidelines:

- The answer NA means that the abstract and introduction do not include the claims made in the paper.
- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

# 2. Limitations

Question: Does the paper discuss the limitations of the work performed by the authors?

Answer: [Yes]

Justification: We discuss the limitations of our approach and the outlook for further work in the conclusion section.

Guidelines:

- The answer NA means that the paper has no limitations, while the answer No means that the paper has limitations, but those are not discussed in the paper.
- The authors are encouraged to create a separate "Limitations" section in their paper.
- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

# 3. Theory assumptions and proofs

Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?

Answer: [Yes]

Justification: We provide the full set of assumptions and a complete proof in the main manuscript and the supplemental material.

Guidelines:

- The answer NA means that the paper does not include theoretical results.
- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
- All assumptions should be clearly stated or referenced in the statement of any theorems.
- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
- Theorems and Lemmas that the proof relies upon should be properly referenced.

# 4. Experimental result reproducibility

Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?

Answer: [Yes]

Justification: We provide full information about the replication experiments in the main manuscript and make our code and data publicly available.

Guidelines:

- The answer NA means that the paper does not include experiments.
- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model.
In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example:
(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

# 5. Open access to data and code

Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?

Answer: [Yes]

Justification: We have made our code and data publicly available on GitHub anonymously. The anonymous link is https://anonymous.4open.science/r/CPO-FD61/.

Guidelines:

- The answer NA means that the paper does not include experiments requiring code.
- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
- The instructions should contain the exact command and environment needed to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

# 6. Experimental setting/details

Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?

Answer: [Yes]

Justification: We present the details of the experiments in the supplemental material.

Guidelines:

- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.

# 7. Experiment statistical significance

Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

Answer: [Yes]

Justification: We report the results of experiments with statistical significance.

Guidelines:

- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.).
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar rather than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g., negative error rates).
- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

# 8. Experiments compute resources

Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

Answer: [Yes]

Justification: We provide the information about computer resources in the supplemental material.

Guidelines:

- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

# 9. Code of ethics

Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?

Answer: [Yes]

Justification: We made our code and data publicly available anonymously during the review stage.

Guidelines:

- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

# 10. Broader impacts

Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?

Answer: [Yes]

Justification: We discuss the broader impacts of our work in the conclusion section.

Guidelines:

- The answer NA means that there is no societal impact of the work performed.
- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate deepfakes faster.
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

# 11. Safeguards

Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?

Answer: [NA]

Justification: We have no such risks.
Guidelines:

- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

# 12. Licenses for existing assets

Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

Answer: [Yes]

Justification: We cite the original papers that produced the code packages and datasets we use.

Guidelines:

- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
- If this information is not available online, the authors are encouraged to reach out to the asset's creators.

# 13. New assets

Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?

Answer: [Yes]

Justification: We describe the details of the dataset and code as part of our submission.

Guidelines:

- The answer NA means that the paper does not release new assets.
- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
- The paper should discuss whether and how consent was obtained from people whose asset is used.
- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

# 14. Crowdsourcing and research with human subjects

Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?

Answer: [NA]

Justification: We do not involve crowdsourcing nor research with human subjects.

Guidelines:

- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

# 15. Institutional review board (IRB) approvals or equivalent for research with human subjects

Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?

Answer: [NA]

Justification: We do not involve crowdsourcing nor research with human subjects.

Guidelines:

- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.

# 16. Declaration of LLM usage

Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.

Answer: [NA]

Justification: The core method development in this research does not involve LLMs as any important, original, or non-standard components.

Guidelines:

- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.

# Appendix

# A Related Works

# A.1 Concept Drift

In their surveys spanning multiple studies, Lu et al. [6, 50] establish a comprehensive taxonomy of concept drift mitigation strategies, categorizing prevailing approaches into three principal paradigms: error rate-driven adaptations [51, 52], data distribution-aware methodologies [7, 53, 54], and multi-hypothesis frameworks [55, 56]. Our work aligns with the distribution-centric paradigm, which distinguishes itself through its dual capacity for both precise drift detection via rigorous statistical analysis and holistic drift characterization across temporal, spatial, and quantitative dimensions. Specifically, these distribution-driven techniques not only identify that concept drift has occurred but also support granular diagnostics, including temporal localization of drift emergence, feature subspace attribution, and severity quantification, capabilities that make them particularly advantageous for building interpretable adaptive systems that require both drift awareness and targeted model recalibration.

Recent advances in concept drift adaptation have yielded sophisticated methodologies across diverse learning scenarios. The Online Boosting Adaptive Learning (OBAL) framework [55] is a dual-phase solution for multistream classification that first employs Adaptive Covariate Shift Adaptation (AdaCOSA) to model dynamic inter-stream correlations and then transitions to Gaussian Mixture Model-based weighting for asynchronous drift mitigation.
Complementing this, CDMLLM [7] reveals critical vulnerabilities in vision-language models through systematic analysis of concept drift-induced biases across pre-training and fine-tuning stages, proposing a unified framework that synergizes T-distribution adaptation for long-tailed calibration with explicit out-of-distribution detection to enhance multimodal alignment robustness. Expanding the scope beyond individual data streams, GDDM [57] introduces a distribution-free statistical framework for detecting subtle group-level concept drifts in multi-stream environments through adaptive hypothesis testing mechanisms. Additionally, the studies by [58-60] introduce a multi-perspective uncertainty framework designed to handle concept drift in diverse data streams. This approach utilizes set-based prediction methods to unify probabilistic results into clear categorical formats. In parallel, DDG-DA [61] pioneers anticipatory concept drift adaptation by modeling environmental evolution through predictive factor analysis and synthetic data generation, effectively bridging current observations with projected distribution shifts. Advancing unsupervised detection paradigms, STUDD [53] establishes a teacher-student discrepancy framework that leverages predictive consistency analysis to enable label-agnostic drift identification while maintaining detection sensitivity, thereby addressing practical deployment constraints in evolving data environments. + +# A.2 Causal Inference + +Recently, a growing number of researchers have incorporated causal inference into deep-learning models, especially in large models. Deconfounded Image Captioning (DIC) [62] is proposed to address dataset bias in vision-language models through a causal lens that integrates backdoor and front-door adjustments for systematic bias mitigation.
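The backdoor adjustment underlying such interventions replaces the confounded observational estimate $P(Y|X)$ with $P(Y|do(X)) = \sum_{z} P(Y|X,z)\,P(z)$, averaging over the confounder's prior rather than its treatment-conditional distribution. A toy discrete example, with all probability tables invented purely for illustration:

```python
# Toy backdoor adjustment: confounder Z influences both treatment X and outcome Y.
p_z = {0: 0.7, 1: 0.3}                      # P(Z=z), the confounder prior
p_y_given_xz = {                            # P(Y=1 | X=x, Z=z)
    (0, 0): 0.10, (0, 1): 0.40,
    (1, 0): 0.30, (1, 1): 0.70,
}
p_x_given_z = {0: 0.2, 1: 0.8}              # P(X=1 | Z=z): Z makes treatment more likely

def p_y_do_x(x):
    """Backdoor adjustment: P(Y=1 | do(X=x)) = sum_z P(Y=1 | x, z) P(z)."""
    return sum(p_y_given_xz[(x, z)] * pz for z, pz in p_z.items())

def p_y_obs_x(x):
    """Naive observational P(Y=1 | X=x) = sum_z P(Y=1 | x, z) P(z | x)."""
    px = sum((p_x_given_z[z] if x else 1 - p_x_given_z[z]) * p_z[z] for z in p_z)
    return sum(
        p_y_given_xz[(x, z)]
        * (p_x_given_z[z] if x else 1 - p_x_given_z[z]) * p_z[z] / px
        for z in p_z
    )

# The interventional effect differs from the confounded observational estimate:
ate_causal = p_y_do_x(1) - p_y_do_x(0)   # 0.42 - 0.19 = 0.23
ate_naive = p_y_obs_x(1) - p_y_obs_x(0)  # inflated by the spurious correlation
```

In this toy setup the naive contrast overstates the treatment effect because Z raises both the chance of treatment and the chance of the outcome; the adjustment removes exactly that spurious path.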
The framework provides principled causal analysis of spurious correlations in multimodal alignment, offering theoretical grounding for decomposing bias sources through structured interventions. Likewise, aiming for spurious correlations induced by visual and linguistic biases during training, CIIC [63] is proposed as a causal intervention framework combining an Interventional Object Detector (IOD) and Interventional Transformer Decoder (ITD) guided by structural causal models. By applying backdoor adjustment through IOD's feature disentanglement and ITD's dual de-confounding mechanism, their approach systematically mitigates confounding effects across encoding and decoding stages, demonstrating enhanced generalization through causal correlation modeling. Similarly, targeting multi-hop fact verification bias in large language models, Causal Walk [64] is proposed as a front-door adjustment framework that disentangles complex spurious correlations in evidence chains. The method models reasoning paths as mediators in structural causal models, decomposing causal effects via random walk-based treatment-mediator estimation and geometric mean-based mediator-outcome approximation. By integrating adversarial and symmetric datasets synthesized with large language models, the approach demonstrates superior debiasing performance. + +Recent advances in causal representation learning have produced innovative methodologies to address confounding biases in large models. The C2L framework [65] tackles model fragility through contrastive counterfactual synthesis, introducing a collective decision mechanism that aggregates predictions across probabilistically generated counterfactual sets while enforcing causal invariance via distributional consensus supervision, thereby overcoming dataset-inherent bias limitations of conventional augmentation approaches.
Building on causal interpretability, ABCD [66] establishes formal theoretical grounding for Transformer architectures by reinterpreting self-attention mechanisms as structural equation estimators that capture conditional independence relations through partial correlation analysis in deep attention layers, enabling zero-shot causal discovery over input sequences while accounting for latent confounders through repurposed pre-trained models. Expanding the causal intervention paradigm, Causal Attention (CATT) [67] implements front-door adjustment via dual-path processing of In-Sample and Cross-Sample Attention, strategically integrating external contextual information through CS-ATT while preserving standard attention mechanisms to dynamically mitigate spurious correlations without explicit confounder specification, thereby achieving bias-resistant vision-language alignment through implicit causal disentanglement. Moreover, ResilientCL [10, 68] proposes a causal interventional contrastive objective to mitigate the concept drift within the momentum network of the contrastive pre-training paradigm. + +# A.3 Reinforced Fine-tuning + +The integration of reinforcement learning (RL) into post-training alignment of large language models (LLMs) has undergone remarkable evolution since OpenAI's seminal work on Reinforcement Learning from Human Feedback (RLHF) [69], which established a foundational paradigm for aligning model outputs with human values [70]. While early implementations like OpenAI-o1 [71] demonstrated the efficacy of human preference modeling, the prohibitive costs of manual annotation have catalyzed a paradigm shift toward automated reward generation through pre-trained systems.
This transition has yielded innovative methodologies ranging from Bai et al.'s [72] constitutional approach utilizing sparse natural language feedback as proxy signals, to DeepSeek's progressive framework that first established baseline performance through pure RL (R0) before introducing their R1 variant [73]. The latter achieved enhanced generalization through cyclic alternation between supervised fine-tuning and their novel GRPO optimization protocol [25], exemplifying the field's progression toward self-contained alignment systems. + +Beyond these, the landscape of alignment methodologies continues to diversify through innovative paradigms: ReST [74] employs iterative self-generation of policy-derived samples to refine LLMs via offline reinforcement learning, while DPO [15] fundamentally reformulates alignment as direct preference optimization through implicit reward modeling. Concurrent developments span Rejection Sampling Fine-Tuning's [75] curation of validated reasoning trajectories for supervised augmentation, and ReFT's [1] phased optimization combining SFT initialization with PPO-driven exploration of automated reasoning path generation. Building upon these foundations, Visual-RFT [2] extends GRPO-based strategies to multimodal contexts, enhancing visual-language alignment under data scarcity, whereas B-STaR [76] introduces dynamic configuration adaptation for self-teaching systems through principled exploration-exploitation balancing. Pushing the boundaries of evaluation rigor, Qwen-Math-PRM [77] synergizes Monte Carlo estimation with LLM-as-judge consensus filtering while pioneering a hierarchical assessment framework integrating stepwise and holistic performance metrics. Moreover, ViLaM [78] performs unsupervised visual grounding via reinforcement learning in open-world environments. Finally, APO [79] leverages reinforcement learning to align knowledge across multiple teacher models for distillation.
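DPO's reformulation of alignment as direct preference optimization reduces, for a single preference pair $(y_w, y_l)$, to a log-sigmoid loss over the difference of policy-versus-reference log-ratios. A minimal sketch with made-up sequence log-probabilities (not any model's real outputs):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * [(log pi(y_w) - log pi_ref(y_w)) - (log pi(y_l) - log pi_ref(y_l))])."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical sequence log-probs: the policy already prefers the chosen answer y_w.
loss_good = dpo_loss(logp_w=-4.0, logp_l=-9.0, ref_logp_w=-6.0, ref_logp_l=-7.0)
# A policy that instead prefers the rejected answer y_l incurs a larger loss.
loss_bad = dpo_loss(logp_w=-9.0, logp_l=-4.0, ref_logp_w=-6.0, ref_logp_l=-7.0)
```

The implicit reward is the scaled log-ratio against the frozen reference policy, which is what lets DPO skip an explicit reward model entirely.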
+ +# A.4 Multimodal Reasoning + +Recent advances in Long Chain-of-Thought (Long CoT) reasoning [80, 81] have significantly enhanced the capacity of Large Language Models (LLMs) to perform multi-step reasoning and self-correction. By incorporating self-reflection strategies, these models can dynamically diagnose and revise their intermediate reasoning traces, thereby mitigating certain types of reasoning inconsistencies during inference. In contrast, our approach introduces a proactive intervention at the training stage through counterfactual sample generation, which explicitly shapes the model's causal representations and decision boundaries. This enables the model to internalize more consistent causal reasoning patterns from the outset, rather than relying on post-hoc correction during inference. Hence, we view our method and Long CoT-based self-reflection strategies as complementary: while Long CoT enhances reasoning reliability at runtime, our approach strengthens causal robustness during learning. We believe future work could explore integrating both directions, leveraging Long CoT's reflective inference mechanisms together with counterfactual training, to further advance consistent causal reasoning in reasoning-oriented LLMs. + +In parallel, Multimodal Chain-of-Thought (MM-CoT) reasoning [82-85] has emerged as a crucial paradigm for enabling LLMs to align and reason across heterogeneous modalities such as vision and language. For instance, M3CoT [83] and DDCoT [85] introduce multi-domain, multi-step reasoning frameworks that emphasize structured cross-modal inference, while the survey in [84] provides a comprehensive taxonomy of MM-CoT paradigms and benchmarks. Compared with these inference-time reasoning approaches, our method introduces a proactive causal intervention during training through counterfactual sample generation, aiming to enhance causal robustness within multimodal reasoning processes.
We view our approach as complementary to both Long CoT and MM-CoT paradigms—while they improve the reasoning trajectory at inference time, ours focuses on stabilizing causal representations during model optimization. This synergy could be a promising direction for future research. + +![](images/dda4c0294a580c5bd49bfee4499fa6a8ef5bb460700e37c591d09a59020cf052.jpg) +Figure 5: Samples of CXR-CounterFact (CCF) Dataset. + +# Ground Truth: + +Findings: + +PA and lateral views of the chest demonstrate low lung volumes. Tiny bilateral pleural effusions are new since No signs of pneumonia or pulmonary vascular congestion. Heart is top normal in size though this is stable. Aorta is markedly tortuous, unchanged. Aortic arch calcifications are seen. There is no pneumothorax. No focal consolidation. Partially imaged upper abdomen is unremarkable. + +Diagnosis: + +The disease of this patient is Pleural Effusion. + +# Counterfactual Thinking: + +Findings: + +PA and lateral views of the chest demonstrate low lung volumes with new signs of consolidation in both lower lobes. Tiny bilateral pleural effusions are noted. No signs of pneumonia were previously present. There is evidence of pulmonary vascular congestion. The heart is slightly enlarged, though this has not been stable. The aorta is markedly tortuous, unchanged. Aortic arch calcifications are seen. There is no pneumothorax. The partially imaged upper abdomen is unremarkable. + +Diagnosis: + +The disease of this patient is Pneumonia. + +Findings: + +PA and lateral views of the chest demonstrate low lung volumes. Tiny bilateral pleural effusions are new since last examination. No signs of pneumonia or pulmonary vascular congestion. Heart is top normal in size though this is stable. Aorta is markedly tortuous, unchanged. Aortic arch calcifications are seen. There is no pneumothorax. Focal consolidation is noted in the right lower lobe. Partially imaged upper abdomen is unremarkable. 
+ +Diagnosis: + +The disease of this patient is Consolidation. + +![](images/19022a64ce5d454d37c11a30f91847e567d390d383a57a3b0daa67488e5d7e96.jpg) + +# Ground Truth: + +Findings: + +The patient's overlying arm on the lateral view partially obscures the view and makes evaluation of the lateral view suboptimal. Left greater than right biapical scarring is noted. No pleural effusion or pneumothorax is seen. The cardiac and mediastinal silhouettes are unremarkable. Multiple surgical clips are noted overlying the left hemithorax and the left axilla. Difficult to assess for medial left clavicular injury, nondisplaced fracture not excluded. + +Diagnosis: + +The disease of this patient is Lung Opacity. + +# Counterfactual Thinking: + +Findings: + +The patient's overlying arm on the lateral view partially obscures the view and makes evaluation of the lateral view suboptimal. Left greater than right biapical scarring is noted. A small left-sided pneumothorax is suspected but difficult to assess definitively due to the arm overlap. No pleural effusion is seen. The cardiac and mediastinal silhouettes are unremarkable. Multiple surgical clips are noted overlying the left hemithorax and the left axilla. Nondisplaced fracture of the medial left clavicle cannot be excluded. + +Diagnosis: + +The disease of this patient is Pneumothorax. + +Findings: + +The patient's overlying arm on the lateral view partially obscures the view and makes evaluation of the lateral view suboptimal. Left greater than right biapical scarring is noted with evidence of edema in the left lower lobe. No pleural effusion or pneumothorax is seen. The cardiac and mediastinal silhouettes are unremarkable. Multiple surgical clips are noted overlying the left hemithorax and the left axilla. Difficult to assess for medial left clavicular injury, nondisplaced fracture not excluded. + +Diagnosis: + +The disease of this patient is Edema.
+ +# B CXR-CounterFact (CCF) Dataset + +We choose chest diagnostics as the application because of the abundance of public professional medical diagnosis reports [86], which serve as rich CoT reasoning processes. This domain provides an ideal platform for studying MLLM reasoning processes and validating the robustness of our proposed CPO in real-world settings. + +Figure 5 showcases the samples utilized for training and validation in our study. We use a medical-domain LLM to generate the related caption of each image, with the following prompt: + +"This is a radiology chest DR examination report of a patient: . + +This is a diagram of the relationship between lung diseases and their radiographic manifestations: + +Please generate a counterfactual radiology text showing <disease> based on the relationship and above context, with the same formatting." + +As depicted in Figure 5, comprehensive descriptions of the image are provided through long-form text, encompassing details such as size, position, relationships, and other relevant information about the disease present in the image. This ensures a detailed and information-rich depiction of the visual content. We have publicly released the datasets used for training and validation. + +# C Implementation Details + +In this section, implementation details are provided. + +For the supervised fine-tuning process, the hyperparameters are presented in Table 6. Qwen2.5-VL (7B) [16] is applied as our pre-trained model. During the SFT, we utilize the AdamW optimizer, which is configured with a cosine annealing schedule as the learning policy. The initial learning rate is set to $1 \times 10^{-4}$ , and the AdamW optimizer is employed with hyperparameters $\beta = (0.9, 0.98)$ . Additionally, we set the weight decay to 0.05 and the dropout rate to 0.1. During the first 20 warm-up steps, the learning rate increases to $1 \times 10^{-4}$ , and subsequently decays to $10^{-7}$ .
Unless otherwise specified, the supervised fine-tuning of our multi-modal large language model consists of 660 steps, executed on $2 \times 2$ NVIDIA A100 GPUs. + +Table 6: The training hyperparameters of our MLLM. + +
| Hyperparameter | Supervised Fine-tuning | Counterfactual Preference Optimization |
| --- | --- | --- |
| Training Steps | 660 | 7,750 |
| Warmup Steps | 20 | 0 |
| Warmup Ratio | 0.05 | N/A |
| Optimizer | AdamW | AdamW |
| Learning Rate | 1e-4 | 2e-5 |
| Learning Rate Decay | Cosine | Cosine |
| Adam β | (0.9, 0.98) | (0.9, 0.98) |
| Weight Decay | 0.05 | 0.05 |
| Batch Size | 15 | 4 |
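The learning-rate policy described in the text (linear warmup to the peak rate, then cosine annealing) can be sketched as below. The exact interpolation and floor handling are our own assumptions based on the stated values (peak 1e-4, 20 warmup steps, 660 total steps, final rate 1e-7); the paper's actual scheduler implementation may differ in detail.

```python
import math

def lr_at(step, total_steps=660, warmup_steps=20, peak_lr=1e-4, min_lr=1e-7):
    """Linear warmup to peak_lr over warmup_steps, then cosine annealing to min_lr."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# Warmup ends at the peak rate; the final step sits near the 1e-7 floor.
peak = lr_at(19)
final = lr_at(659)
```

For the CPO stage the same shape applies with `warmup_steps=0`, `peak_lr=2e-5`, and `total_steps=7750`, matching Table 6.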
+ +In the counterfactual preference optimization (CPO) stage, the initial learning rate is reduced to $2 \times 10^{-5}$ without warmup. The visual encoder and text decoder are frozen during training. Thus, the batch size can be decreased to 4. The reinforced custom-tuning consists of 7,750 steps, executed on $2 \times 2$ NVIDIA A100 GPUs. Other training parameters are the same as in fine-tuning. + +It is worth noting that both the SFT and CPO models were trained for exactly one epoch, as suggested by Qwen2.5-VL [4]. The difference in the number of training steps arises solely because the CPO dataset is approximately three times larger than the SFT dataset due to the addition of counterfactual trajectories. Therefore, both models completed their training after seeing their respective datasets once, indicating comparable convergence points in terms of epoch count, rather than SFT being stopped prematurely. + +# C.1 Computational Cost + +The main computational overhead of our framework stems from counterfactual sample generation using LLM experts. To ensure feasibility, we employ a targeted generation strategy based on the hierarchical concept graph, which encodes disease entities, radiographic features, and their statistical dependencies. This allows us to selectively perturb drift-prone features and generate only the two most relevant counterfactual samples per instance, rather than performing random perturbations. + +Moreover, all counterfactuals are generated once offline rather than dynamically during training. The construction of the CCF dataset took approximately five days on four A100 GPUs, including both inference and validation. Although this process is computationally intensive, it represents a one-time cost that is affordable for real-world deployment. The subsequent CPO fine-tuning and evaluation were performed on $2 \times \mathrm{A}100$ GPUs with negligible extra overhead compared to standard fine-tuning.
+ +# C.2 Concept Graph Construction + +The prompt we used for automated extraction from the MIMIC-CXR dataset is as follows: + +```txt +TASK ROLE: You are a senior chest radiologist. I will provide a large number of chest DR case report texts. Please automatically extract key imaging feature words related to various chest diseases from these reports, and use these features to build a structured imaging knowledge graph to reveal the association between different diseases based on common imaging features. + +CORE REQUIREMENTS: + +1. Standardized feature extraction: +- Extract all abnormal imaging descriptors from reports. +- Normalize terminology using Radiology Lexicon (RadLex) and Fleischner Society guidelines. + +2. Disease-feature mapping: +- Link standardized features to diagnosed diseases per report. +- Identical features MUST use identical normalized terms across diseases. + +3. Knowledge graph construction: +- Nodes: Diseases, such as Pneumothorax, Atelectasis and Pneumonia; Features, such as lung opacity and air bronchograms. +- Relationship: (Disease) - [HAS_FEATURE] -> (Feature); (Feature) - [ASSOCIATED_WITH] -> (Disease) +- Semantics: Diseases sharing one or more identical feature nodes are interconnected. + +OUTPUT REQUIREMENTS: +Please output the final constructed radiological knowledge graph and the standardized feature-disease association data extracted from the report in a structured JSON format.
The format is as follows: +{ "data":[ {"diseases":[],"features":[]}, //...cases ], "disease_feature_map":{ "Pneumonia": ["Consolidation", "Ground-glass opacity", "Air bronchogram", "Pleural effusion"], //...relationships } +} +``` \ No newline at end of file diff --git a/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/images.zip b/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1e0fbddafe8ce5abf7ead52196651130c533252c --- /dev/null +++ b/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30b7f0fffa240731cc8e4951a8e67ae3abe283c7323c827af3823bd246198197 +size 517318 diff --git a/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/layout.json b/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..49896939641b511b1023855b3cb6417934ac0b81 --- /dev/null +++ b/NeurIPS/2025/Walking the Tightrope_ Autonomous Disentangling Beneficial and Detrimental Drifts in Non-Stationary Custom-Tuning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2ab7a14104ccaab8711c42fb6f4df374aa299e7a7e5e7eb9c17826b887b4dfc +size 825376 diff --git a/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_content_list.json b/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_content_list.json new file 
mode 100644 index 0000000000000000000000000000000000000000..9882002c326fb9fdc465a2d0ca3275faaaea3212 --- /dev/null +++ b/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0375c511446779bc3805dda20636e6ee0f1faa9f57482b5b76a4d7318f853998 +size 183293 diff --git a/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_model.json b/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..13d6c74b44cf8f8235c462a3ce11672b6b19cf34 --- /dev/null +++ b/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7c2dbbaaa2175b7ef46d8f1906511aaf5af788ecd058ebf3a8bea21c2f1cfd1 +size 232193 diff --git a/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_origin.pdf b/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..528ff07d9f2ea532b3429945d5d43eb6a426a128 --- /dev/null +++ b/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/52c771b2-b6d2-47fa-9bec-199f0a35100b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:919e63d1c33e3e70ba89ef55534dfb31b3bda266f3b1de5f2655b49bdf436fab +size 11015567 diff --git a/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/full.md b/NeurIPS/2025/Wan-Move_ Motion-controllable 
Video Generation via Latent Trajectory Guidance/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b23bd596827c7ad36f4caf5418ca3a7c9bd54c76 --- /dev/null +++ b/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/full.md @@ -0,0 +1,930 @@ +# Wan-Move: Motion-controllable Video Generation via Latent Trajectory Guidance + +Ruihang Chu $^{1,2* \dagger \ddagger}$ Yefei He $^{1*}$ Zhekai Chen $^{3*}$ Shiwei Zhang $^{1\dagger}$ Xiaogang Xu $^{4}$ + +Bin Xia + +Dingdong Wang + +Yu Liu + +Hongwei Yi + +Yingya Zhang + +Xihui Liu $^{3}$ + +Hengshuang Zhao $^{3}$ + +Yujiu Yang $^{2\dagger}$ + +$^{1}$ Tongyi Lab, Alibaba Group + +$^{2}$ Tsinghua University + +$^{3}$ HKU + +$^{4}$ CUHK + +Github: https://github.com/ali-vilab/Wan-Move + +# Abstract + +We present Wan-Move, a simple and scalable framework that brings motion control to video generative models. Existing motion-controllable methods typically suffer from coarse control granularity and limited scalability, leaving their outputs insufficient for practical use. We narrow this gap by achieving precise and high-quality motion control. Our core idea is to directly make the original condition features motion-aware for guiding video synthesis. To this end, we first represent object motions with dense point trajectories, allowing fine-grained control over the scene. We then project these trajectories into latent space and propagate the first frame's features along each trajectory, producing an aligned spatiotemporal feature map that tells how each scene element should move. This feature map serves as the updated latent condition, which is naturally integrated into the off-the-shelf image-to-video model, e.g., Wan-I2V-14B, as motion guidance without any architecture change. It removes the need for auxiliary motion encoders and makes fine-tuning base models easily scalable.
Through scaled training, Wan-Move generates 5-second, 480p videos whose motion controllability rivals Kling 1.5 Pro's commercial Motion Brush, as indicated by user studies. To support comprehensive evaluation, we further design MoveBench, a rigorously curated benchmark featuring diverse content categories and hybrid-verified annotations. It is distinguished by larger data volume, longer video durations, and high-quality motion annotations. Extensive experiments on MoveBench and the public dataset consistently show Wan-Move's superior motion quality. Code, models, and benchmark data are made available. + +# 1 Introduction + +Motion lies at the heart of video generation as it fundamentally transforms static images into dynamic visual narratives. Recognizing its importance, both the research community [1, 2, 3] and commercial players [4, 5, 6] have devoted considerable effort to controlling motion in video generative models. + +The essence of motion control lies in injecting a motion guidance signal into the video generation process. Thus, the two key choices are (i) how to represent the guidance signal and (ii) how to integrate it into the generator. First, existing motion guidance representations can be broadly classified into sparse and dense types. Sparse representations include bounding boxes [7, 8] and segmentation masks [1, 9, 10]. 
Although these signals can steer an object's global movement, they fail to control + +*Equal contribution +†Corresponding authors +Project leader + +![](images/815cc44a61462a51bf75535de11313669722ef63b678e689c4d381c1754f8d3e.jpg) + +![](images/f82b328844485ffcdaf749c8dd3f8207450853a11d3059a8c71e9863e6a5af03.jpg) + +![](images/d99912cdf8d1f6647c1e45e5f9df3c9e9845fa035f0f6c18000b776695abbb9e.jpg) + +![](images/c24210b4e662ea01281aad88a114237af81fe63ae0c0e9153ba827b118423ae4.jpg) + +![](images/c8d6d0ae346ff3a7db9355dc70583a8251c7e57908f00b7fdd0147c3bb973c9a.jpg) + +![](images/c5dc17d37962cc8d38d9e2aa85e324fcb836c6742ce98bf10393e902c9b29d16.jpg) + +![](images/531f55d6293b729e9cc8dc66b3c8f4a15e0dbecdee1bf7fa2530d12a95128506.jpg) + +![](images/8604029f8dd47f7a4fea89bb4bd8627571d88081e342317901562a58dfa1ef3b.jpg) + +![](images/f794f60a0fea88b02f194d50b64005e2418c4aa8afe6de5d22c484d4fddd730e.jpg) + +![](images/583fbd4d69ab1587e62e2d040acdbb50c2dc3ec2e5fe606b92aad5147fe22dd0.jpg) + +![](images/0bc353147075a00664a09c2f3bb2bc73804dc04aab9aaa75a12d6f7a98b0ac92.jpg) + +![](images/9d6d97b61fed330e0e97d3078a1baef1edf77976633e4a4a2853c65e6524532e.jpg) + +![](images/dbb6f711764505b5f454e9ba9aec2ae570f86d8061466460c6d4da9b2353232b.jpg) + +![](images/55a68befdd4c7ad01aa7bf722da43c4b1f38b64152fc7e42dc2757b79639a38c.jpg) + +![](images/30e467f5f94654d58fd3bb41744ec52ef7b5d9a93f408839a4d68a2f06a341db.jpg) + +![](images/fcdd255899109c661cc79493cf6cc3ab9cb576dd95919fe0963927bbbab3c86a.jpg) + +![](images/e003b81589854f9b3aee5269f32623f3cfc2dfcc65a10756a59c36837ae0487c.jpg) + +![](images/1125639a3535a401d2dd7e719a540db4d95e6f763822a1bfdac3d1ca32568b12.jpg) + +![](images/cba432dc77bdbdbb605458ba0fb4941f37eee1e60d7b7c6b3b7281c8a05054af.jpg) + +![](images/c77d87adddc7ee4224160637d931f51bce9b2b5e1ce5bb43e3e6990b7a2ab4c1.jpg) + +![](images/493601a09514b29c155b499d00973432dec070e3fcc0694be45efae491a7797f.jpg) + +![](images/a2c8ba85451fe0a33aece67403c780d922d9233c5f8ca38680c8a5c46dfff807.jpg) + 
+![](images/12a3eaf35fb11a2fff3429ac87d5e43cdccb687bc3ab9c4378d3bc81d05ce4d2.jpg) + +![](images/1fccee7bfd6eea095aa46b3119f6afd4c87cef6103136ceb02274679134f6660.jpg) + +![](images/3a7fe06a9b2f29af633df4fb140ccde2bf6d78a5035f5e7f0e45645484e882a5.jpg) + +![](images/4b4621098b0d8b4cffd917e16247677938e108673349392c7867d06c83548016.jpg) +Figure 1: Wan-Move is an image-to-video generation framework that supports diverse motion control applications. The generated samples (832×480p, 5s) exhibit high visual fidelity and accurate motion. + +![](images/7fc5dd06f381b99aadeb68328836c42f40791240b3b4d3839080fa076f2a11f6.jpg) + +![](images/70281062086291a37144af58c2f058a79bbb859dc93e1260fec3bbd3f6b8b0e4.jpg) + +![](images/d5f756489352b57b98f2a80975aac9eae37e5f8ed09d183c6d07d7d30521f950.jpg) + +![](images/6a3da8aeae86fba63fb73bd6ab1f79fc600e94ca0fa735686282c5a7c5aef6e1.jpg) + +Input first frame + +Generated video (5s)
+ +To tackle these challenges, we present Wan-Move, a novel motion-control framework that builds on the existing image-to-video (I2V) generation model without adding auxiliary motion-processing modules. Our core idea is to inject motion information by directly editing the image condition features. We turn it into an updated latent guide that conveys both appearance and motion throughout video generation. Thanks to this simple and effective design, Wan-Move delivers high-quality motion control and scales easily by fine-tuning the powerful I2V backbone. + +Specifically, we represent motion trajectories using point tracks [13] as they capture fine-grained local and global movement. Unlike prior work [13] that embeds point trajectories into latent features, we transfer each trajectory from pixel space into latent coordinates. As I2V generation aims to animate the first frame, we guide this process by copying the first-frame feature at each tracked position to its corresponding location in later frames along the latent trajectory. Each copied feature preserves rich context, thus the propagated signal drives more natural local motion, as verified in Sec. 5.3. Moreover, since motion guidance is injected by editing the image condition features, we add no extra modules. As a result, Wan-Move can plug straight into the I2V backbone, such as Wan-I2V-14B [19], and support scalable fine-tuning with fast convergence. Fig. 1 shows that Wan-Move generates high-fidelity video clips $(832\times 480\mathrm{p},5\mathrm{s})$ with precise motion control, enabling a diverse set of applications, as illustrated in Sec. 5.4. To our best knowledge, it is the first research model (to be open-sourced) to match the visual quality of commercial products such as Kling 1.5 Pro's Motion Brush [4]. + +To set a rigorous, comprehensive evaluation for motion-control methods, we introduce a free-license benchmark termed MoveBench. 
Compared with existing benchmarks [20, 21, 22] that offer fewer clips, shorter durations, and incomplete motion annotations, MoveBench provides more data, greater diversity, and reliable motion annotations (Fig. 5). Concretely, we design a curation pipeline that categorizes the video library into 54 content categories of 10-25 videos each, giving rise to over 1000 cases that ensure broad scenario coverage. All video clips maintain a 5-second duration to facilitate evaluation of long-range dynamics. Every clip is paired with detailed motion annotations for single or multiple objects, including both precise point trajectories and sparse segmentation masks to fit a wide range of motion-control models. We ensure annotation quality with an interactive labeling pipeline that combines human labeling with SAM [23] predictions, marrying annotation precision with automated scalability. In summary, our contributions are as follows:
+
+- We propose Wan-Move for motion control in image-to-video generation. Unlike prior approaches that require motion encoding, it injects motion guidance by editing condition features, adding no new modules and allowing easy fine-tuning of base models at scale.
+- We introduce MoveBench, a comprehensive and well-curated benchmark for assessing motion control. A hybrid human+SAM labeling pipeline ensures annotation quality.
+- Extensive experiments on MoveBench and public datasets show that Wan-Move supports diverse motion-control tasks and delivers commercial-grade results with scaled training.
+
+# 2 Related Work
+
+Video generation models. Video diffusion models [24] pioneer the extension of denoising diffusion probabilistic models (DDPMs) to video generation through a 3D U-Net architecture. Subsequent advancements, such as Imagen Video [25] and Phenaki [26], enhance this framework to produce longer and higher-resolution sequences.
Nevertheless, these CNN-based approaches [27, 28, 29] face limitations in capturing long-range spatiotemporal dependencies. Transformer-based architectures [30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46] overcome this bottleneck and greatly improve training scalability. Recent innovations, including CogVideoX [47] and HunyuanVideo [6], further validate the efficacy of spatio-temporal attention mechanisms for coherent video synthesis. Notably, Wan [19] introduces an efficient framework for both text-to-video and image-to-video generation, setting a new standard for open-source video models. Our Wan-Move leverages latent trajectory guidance to bring precise motion control to the image-to-video diffusion model while preserving visual fidelity.
+
+Motion-controllable video generation. To adapt pretrained video generation models for motion-controllable synthesis, training-free methods [48, 49, 50, 51] optimize input noisy latents or manipulate attention mechanisms, enabling zero-shot control. However, these approaches often exhibit performance degradation when controlling fine-grained or multi-object motion. In contrast, fine-tuning-based methods [52, 2, 15, 1, 3, 14, 20, 53, 16, 54, 55, 56, 10, 12, 57, 58, 59] leverage diverse motion signals and introduce various techniques to integrate them into the base model. While these methods significantly enhance output quality, they typically require auxiliary encoders or fusion modules, complicating the model architecture and limiting training scalability. Among these studies, the most relevant to our work is Motion Prompting [13], as both employ point trajectories to represent motion guidance. However, we differ in two key aspects. First, Motion Prompting [13] encodes point tracks via random embeddings in pixel space, where the guidance consists of pixel-level threads that lack the surrounding context needed for local control.
We express point trajectories in latent space using the image feature, providing rich local information and finer control. Second, Motion Prompting [13] integrates motion guidance through a separate ControlNet [18], whereas we directly use the pretrained base model without architectural modifications, which facilitates scalable fine-tuning. Sec. 5.3 provides quantitative and qualitative evidence of our advantages.
+
+For more specific robotic scenarios, prior works [60, 61] rely on pretrained DINOv2 features [62] to transfer object representations across frames for motion control in generated videos, yet both inherit DINOv2's limitations in representing motion signals. DINOv2 excels at high-level semantic encoding but lacks fine-grained object details. Thus, GEM [1] employs additional identity embeddings to distinguish objects and trains an ObjectNet to bridge the domain gap between DINOv2 and the UNet's feature space. Moreover, the 14×14 patch size in DINOv2 may restrict the granularity of the proposed control.
+
+![](images/388cbc49cbb68d99106f672aad669bddd39e4ada586b4569f93dbaa6ba39d436.jpg)
+(a) Latent feature replication
+Figure 2: (a) To inject motion guidance, we transfer point trajectories from videos to latent space, then replicate the first frame feature into subsequent zero-padded frames along each latent trajectory. (b) Wan-Move is trained upon an existing image-to-video generation model (e.g., work [19, 29]), with an efficient latent feature replication step (as in (a)) to update the condition feature. The CLIP [65] image encoder and umT5 [66] text encoder from the base model are omitted for simplicity.
+
+![](images/bc96c0787707f89f9fd482a88eb20d9c8a6af0ad0db8e194f0b2421cb21245be.jpg)
+(b) Training pipeline of Wan-Move
+
+In contrast, our approach mitigates these limitations by employing the native VAE of I2V foundation models without relying on any auxiliary modules (e.g., identity embeddings).
+
+Benchmark for motion-controllable video generation.
Most motion-controllable work evaluates on small, task-specific datasets, which are typically ad hoc, limited in scope, or insufficient for evaluating long-range dynamics and multi-object interactions. For example, datasets such as DAVIS [21] and VIPSeg [22] have been repurposed for trajectory control methods, yet their short clip durations and sparse annotations make them inadequate for assessing long-term consistency or complex interactions. While MagicBench [20] expands to 600 video clips, it categorizes samples solely by object count and relies on automatically generated labels from noisy pipelines, limiting annotation precision. To address these limitations, we introduce MoveBench, a comprehensive benchmark for motion-controllable video generation. It includes 1018 carefully selected videos with extensive annotations, long-range dynamics, and 54 well-classified content categories.
+
+# 3 Method
+
+# 3.1 Preliminary
+
+Video diffusion models [24, 19, 6, 63] apply Gaussian noise to clean data during the forward process and learn a reverse process to denoise and generate videos. To reduce computational costs, the denoising network typically operates on latent video representations obtained from a pretrained VAE [64]. Given an input video $\mathbf{V} \in \mathbb{R}^{(1 + T) \times H \times W \times 3}$ , the encoder $\mathcal{E}$ compresses both the temporal and spatial dimensions with compression ratios $f_{t}$ (temporal) and $f_{s}$ (spatial), while expanding the channel dimension to $C$ , yielding $\mathbf{x} = \mathcal{E}(\mathbf{V}) \in \mathbb{R}^{(1 + \frac{T}{f_{t}}) \times \frac{H}{f_{s}} \times \frac{W}{f_{s}} \times C}$ . The decoder $\mathcal{D}$ then reconstructs the video from the latent representation as $\hat{\mathbf{V}} = \mathcal{D}(\mathbf{x})$ .
+
+Our work focuses on motion-controllable image-to-video (I2V) generation, where models are required to generate motion-coherent videos based on the input first-frame image and motion trajectories.
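+
+As a quick numerical check of the latent shapes above, the sketch below works through the bookkeeping; the compression ratios $f_t = 4$, $f_s = 8$ and channel count $C = 16$ are illustrative values of our choosing, not figures taken from the paper:

```python
# Shape bookkeeping for x = E(V), where V holds 1 + T frames of H x W x 3 pixels.
# f_t, f_s, and C are illustrative VAE settings, not values from the paper.
def latent_shape(T, H, W, f_t=4, f_s=8, C=16):
    assert T % f_t == 0 and H % f_s == 0 and W % f_s == 0
    return (1 + T // f_t, H // f_s, W // f_s, C)

# An 81-frame (1 + 80) clip at 480 x 832 compresses to a (21, 60, 104, 16) latent.
print(latent_shape(T=80, H=480, W=832))
```

Note how the first frame is kept apart from the temporal compression, matching the $(1 + \frac{T}{f_t})$ term in the paper's formulation.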
While the first frame is encoded into the condition feature $\mathbf{z}_{\mathrm{image}}$ by the VAE, motion trajectories, which can be represented in diverse formats, remain in pixel space. Thus, the key challenge lies in effectively encoding motion trajectories into the condition feature $\mathbf{z}_{\mathrm{motion}}$ and injecting it into the generative model. To avoid the signal degradation and training difficulties associated with additional motion encoders and fusion modules, we aim to develop a motion-control framework that leverages existing I2V models without architectural modifications, as detailed below.
+
+# 3.2 Latent Trajectory Guidance
+
+To enable video generation conditioned on the first frame, an effective approach of popular I2V models [19, 29] concatenates the latent noise $\mathbf{x}_t$ and the first-frame condition feature $\mathbf{z}_{\mathrm{image}}$ along the channel dimension. $\mathbf{z}_{\mathrm{image}}$ is obtained by encoding the first frame $\mathbf{I}$ along with zero-padded subsequent frames $\mathbf{0}_{T\times H\times W\times 3}$ using a pretrained VAE encoder $\mathcal{E}$:
+
+$$
\mathbf{z}_{\text{image}} = \mathcal{E}\left(\operatorname{concat}\left[\mathbf{I}, \mathbf{0}_{T \times H \times W \times 3}\right]\right) \in \mathbb{R}^{\left(1 + \frac{T}{f_t}\right) \times \frac{H}{f_s} \times \frac{W}{f_s} \times C}. \tag{1}
+$$
+
+For motion guidance representation, we adopt point trajectories, following prior studies [13, 14], as they provide fine-grained control and capture both local and global motion. Formally, a point trajectory of length $1 + T$ can be represented as $\mathbf{p} \in \mathbb{R}^{(1 + T) \times 2}$ , where $\mathbf{p}[n] = (x_n, y_n)$ specifies the trajectory location in the $n$ -th frame in pixel space. Existing methods often employ auxiliary modules to encode and integrate trajectories into the backbone.
Yet, this approach may degrade motion signals during encoding. In addition, training extra modules increases the complexity of fine-tuning the base model at scale. This raises a key question: Can we inject pixel-space motion guidance without auxiliary modules?
+
+Intuitively, I2V generation aims to animate the first frame, while motion trajectories specify object positions in each subsequent frame. Given the translation equivariance of VAE models, latent features at corresponding trajectory positions should closely resemble those in the first frame. Motivated by this, we propose encoding trajectories directly into latent space via spatial mapping, eliminating the need for an extra motion encoder:
+
+$$
\tilde{\mathbf{p}}[n] = \begin{cases} \frac{\mathbf{p}[n]}{f_s} & \text{if } n = 0, \\ \frac{\sum_{i=(n-1) f_t + 1}^{n f_t} \mathbf{p}[i]}{f_t \cdot f_s} & \text{if } 1 \leq n \leq \frac{T}{f_t}. \end{cases} \tag{2}
+$$
+
+The latent trajectory position at the first frame is derived by spatial mapping, while for subsequent frames, it is averaged over each group of $f_t$ consecutive frames. This deterministically transforms pixel-space trajectories into latent space. To inject the obtained latent trajectories, we extract the latent features of the first frame at the initial trajectory point $\tilde{\mathbf{p}}[0]$ and replicate them across subsequent frames according to $\tilde{\mathbf{p}}$, leveraging the translation equivariance of latent features, as shown in Fig. 2 (a):
+
+$$
\mathbf{z}_{\text{image}}\left[n, \tilde{\mathbf{p}}[n, 0], \tilde{\mathbf{p}}[n, 1], :\right] = \mathbf{z}_{\text{image}}\left[0, \tilde{\mathbf{p}}[0, 0], \tilde{\mathbf{p}}[0, 1], :\right] \quad \text{for} \quad n = 1, \dots, \frac{T}{f_t}. \tag{3}
+$$
+
+Here, $\mathbf{z}_{\mathrm{image}}[t,h,w,:]$ denotes the feature vector at temporal index $t$, height $h$, and width $w$.
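+
+A minimal NumPy sketch of the two steps above — the pixel-to-latent mapping of Eq. (2) and the feature replication of Eq. (3). The array layout and the rounding of latent coordinates to integer grid indices are our assumptions; the paper does not spell these details out:

```python
import numpy as np

def to_latent_track(p, f_t=4, f_s=8):
    """Map a pixel-space track p of shape (1 + T, 2) to latent coords (Eq. 2)."""
    T = p.shape[0] - 1
    assert T % f_t == 0
    latent = [p[0] / f_s]                       # first frame: spatial mapping only
    for n in range(1, T // f_t + 1):            # later frames: average f_t frames
        latent.append(p[(n - 1) * f_t + 1 : n * f_t + 1].mean(axis=0) / f_s)
    return np.stack(latent)                     # shape (1 + T / f_t, 2)

def replicate_feature(z_image, p_lat):
    """Copy the first-frame feature along the latent track (Eq. 3), in place.
    z_image: (1 + T / f_t, H / f_s, W / f_s, C); rounding to integer indices
    is our assumption."""
    i0, j0 = np.round(p_lat[0]).astype(int)
    for n in range(1, p_lat.shape[0]):
        i, j = np.round(p_lat[n]).astype(int)
        z_image[n, i, j, :] = z_image[0, i0, j0, :]
    return z_image
```

With $f_t = 4$, $f_s = 8$, a 9-frame track ($T = 8$) collapses to 3 latent positions, and the first-frame feature at $\tilde{\mathbf{p}}[0]$ is written into the two subsequent latent frames.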
This operation efficiently injects motion guidance into the condition feature by updating $\mathbf{z}_{\mathrm{image}}$, eliminating the need for explicit motion condition features and injection modules. An overview of the Wan-Move generation framework is presented in Fig. 2(b). When multiple visible point trajectories coincide at a given spatiotemporal position, we randomly select one trajectory's corresponding first-frame feature.
+
+# 3.3 Training and Inference
+
+Training data. We curate a high-quality training dataset through a rigorous two-stage filtering process that ensures both visual quality and motion consistency. First, we manually annotate the visual quality of 1,000 samples and use them to train an expert scoring model for initial quality assessment. To further enhance temporal coherence, we introduce a motion quality filtering stage. Specifically, for each video, we extract SigLIP [67] features from the first frame and compute the mean SigLIP features over the remaining frames. The cosine similarity between these features serves as our stability metric. Based on empirical analysis of 10,000 samples, we establish a threshold to retain only videos whose content remains consistent with the initial frame. This two-stage pipeline produces a final dataset of 2 million 720p videos with strong visual quality and motion coherence. Additional details on the training data sources are provided in the supplementary material.
+
+Model training. Based on our training dataset, we use CoTracker [68] to track the trajectories of a dense $32 \times 32$ grid of points. For each training iteration, we sample $k$ trajectories from a mixed distribution: with $5\%$ probability, no trajectory is used ($k = 0$); with $95\%$ probability, $k$ is uniformly sampled from 1 to 200. Notably, we retain a $5\%$ probability of dropping motion conditions, which effectively preserves the model's original image-to-video generation capability.
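+
+The trajectory-count sampling just described is straightforward to reproduce; the helper below is our sketch, not the authors' code:

```python
import random

def sample_num_tracks(rng, p_drop=0.05, k_max=200):
    """With probability p_drop, drop all motion conditions (k = 0) so plain
    image-to-video behavior is preserved; otherwise draw k uniformly from
    1..k_max (k_max = 200 in the paper's setup)."""
    if rng.random() < p_drop:
        return 0
    return rng.randint(1, k_max)

rng = random.Random(0)
counts = [sample_num_tracks(rng) for _ in range(10_000)]
```

Over many iterations, roughly 5% of batches carry no motion condition at all, which acts like the condition dropout commonly used for classifier-free guidance.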
For the selected trajectories, we extract the first-frame features and replicate them to subsequent zero-padded frames, as formalized by Eq. (3). Since CoTracker distinguishes between visible and + +![](images/1d2d967b9f016c8d9e8599b9b5a1bb3bfb21fce0ef02b8322c462616fc112f9e.jpg) +Figure 3: Construction pipeline of MoveBench to obtain high-quality samples with rich annotations. + +![](images/a0494994112cd6ebef0b3964698f6d7333c42eff83eff45deb667391f4569c36.jpg) +Figure 4: Balanced sample number per class. +Figure 5: Comparison with related benchmarks. + +
| Benchmark | Videos | Frames | Video categorization | Mask track | Point track |
| --- | --- | --- | --- | --- | --- |
| DAVIS | 50 | 35-100 | × | ✓ | × |
| VIPSeg (val) | 343 | 24 | × | ✓ | × |
| MagicBench | 600 | 49 | × | ✓ | × |
| MoveBench | 1018 | 81 | ✓ | ✓ | ✓ |
+
+occluded point trajectories, we perform feature replication only along the visible trajectories. During training, the model parameters $\theta$ are initialized from the I2V model [19] and fine-tuned to predict the vector field $\mathbf{v}_t(\mathbf{x}_t)$ that transports samples from the noise distribution to the data distribution [69]:
+
+$$
\mathcal{L}_{\mathrm{FM}}(\theta) = \mathbb{E}_{t, \mathbf{x}_t, \mathbf{c}}\left[\left\|\mathbf{v}_{\theta}\left(\mathbf{x}_t, t, \mathbf{c}\right) - \mathbf{v}_t\left(\mathbf{x}_t\right)\right\|^2\right], \tag{4}
+$$
+
+where $\mathbf{c}$ denotes the union of the generation conditions.
+
+Inference with Wan-Move. The inference process closely resembles the original I2V model, with an additional latent feature replication operation. Specifically, Wan-Move conditions generation on three inputs: (1) a text prompt, (2) an input image as the first frame, and (3) sparse or dense point trajectories for motion control. Pretrained umT5 [66] and CLIP [65] models are employed to encode global context from the text prompt and first frame, respectively. The resulting image embedding $\mathbf{z}_{\mathrm{global}}$ and text embeddings $\mathbf{z}_{\mathrm{text}}$ are then injected into the DiT backbone via decoupled cross-attention [18]. Additionally, a VAE is used to extract the first-frame condition feature $\mathbf{z}_{\mathrm{image}}$, which is injected through latent feature replication (as detailed in Sec. 3.2). Classifier-free guidance is applied to enhance alignment with conditional information. Formally, let the unconditional vector field be $\mathbf{v}_{\mathrm{uncond}} = \mathbf{v}_{\theta}(\mathbf{x}_t,t,\mathbf{z}_{\mathrm{image}},\mathbf{z}_{\mathrm{global}})$ and the conditional vector field be $\mathbf{v}_{\mathrm{cond}} = \mathbf{v}_{\theta}(\mathbf{x}_t,t,\mathbf{z}_{\mathrm{image}},\mathbf{z}_{\mathrm{global}},\mathbf{z}_{\mathrm{text}})$.
The guided vector field $\tilde{\mathbf{v}}_{\theta}(\mathbf{x}_t,t,\mathbf{z}_{\mathrm{image}},\mathbf{z}_{\mathrm{global}},\mathbf{z}_{\mathrm{text}})$ is a weighted combination of the conditional and unconditional outputs, with guidance scale $w$:
+
+$$
\tilde{\mathbf{v}}_{\theta}\left(\mathbf{x}_t, t, \mathbf{z}_{\text{image}}, \mathbf{z}_{\text{global}}, \mathbf{z}_{\text{text}}\right) = \mathbf{v}_{\text{uncond}} + w\left(\mathbf{v}_{\text{cond}} - \mathbf{v}_{\text{uncond}}\right) \tag{5}
+$$
+
+# 4 MoveBench
+
+Current benchmarks for motion-controllable video generation suffer from small scale, short durations, and a lack of precise, comprehensive motion annotations, which introduces bias and limits evaluation granularity. To address these gaps, we introduce MoveBench, a high-quality benchmark with 1018 videos (480×832 resolution, 5-second duration), designed for comprehensive evaluation of motion-controllable generation, as illustrated in Fig. 3-5. The evaluation videos are selected from Pexels [70], a large-scale, high-quality dataset containing about 400K videos, all released under a free license.
+
+MoveBench combines algorithmic curation with human expertise to ensure diverse, representative, and precisely annotated motion data. Compared to prior works, it offers three key features: (i) High quality. We curate videos through a rigorous four-stage pipeline, as illustrated in Fig. 3(a). We first utilize the expert scoring model from Sec. 3.3 to score videos by visual quality, filtering out low-quality content. Then, the selected videos are cropped to $480\mathrm{p}$ and uniformly sampled to 81 frames to ensure temporal consistency. Finally, videos are clustered into 54 content categories, and we manually select the 15-25 most representative examples for each category, balancing diversity and quality (Fig. 4). (ii) Precise annotations. As shown in Fig.
5, we provide both point and mask annotations, so that methods using mask guidance signals can also be evaluated using our benchmark. + +Table 1: Performance comparisons on MoveBench and DAVIS. Wan-Move consistently yields substantial improvements in both visual fidelity and motion quality across all metrics. + +
| Method | MoveBench | | | | | DAVIS | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | FID↓ | FVD↓ | PSNR↑ | SSIM↑ | EPE↓ | FID↓ | FVD↓ | PSNR↑ | SSIM↑ | EPE↓ |
| ImageConductor [53] | 34.5 | 424.0 | 13.4 | 0.49 | 15.6 | 54.2 | 513.6 | 11.6 | 0.47 | 14.8 |
| LeviTor [16] | 18.1 | 98.8 | 15.6 | 0.54 | 3.4 | 22.0 | 115.4 | 13.3 | 0.51 | 3.7 |
| Tora [14] | 22.5 | 100.4 | 15.7 | 0.55 | 3.3 | 25.9 | 129.2 | 13.7 | 0.49 | 3.5 |
| MagicMotion [20] | 17.5 | 96.7 | 14.9 | 0.56 | 3.2 | 24.2 | 113.4 | 12.8 | 0.53 | 3.5 |
| Wan-Move (Ours) | 12.2 | 83.5 | 17.8 | 0.64 | 2.6 | 14.7 | 94.3 | 16.5 | 0.61 | 2.5 |
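+
+For reference, the end-point error (EPE) reported in Table 1 and the later tables is the mean L2 distance between ground-truth tracks and tracks re-estimated from the generated video; a minimal version is below, where the `(num_tracks, num_frames, 2)` layout is our assumption:

```python
import numpy as np

def end_point_error(gt_tracks, est_tracks):
    """Mean L2 distance (in pixels) between corresponding track points,
    averaged over all tracks and frames."""
    gt = np.asarray(gt_tracks, dtype=float)
    est = np.asarray(est_tracks, dtype=float)
    assert gt.shape == est.shape and gt.shape[-1] == 2
    return float(np.linalg.norm(gt - est, axis=-1).mean())
```

Shifting every estimated point by the vector (3, 4), for instance, yields an EPE of exactly 5 pixels.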
+ +![](images/5cda1f8861245ad02182f0a66bc4b576cd9d6c8900fdcd694ee312f925dd6a0f.jpg) + +![](images/c97ec16c207e6383648e6fe7d1729df4494ccb86701ec59e9b2562d995c4ac40.jpg) +Text prompt A close-up shot shows a person using an electric planer to smooth a wooden board. Both of his hands are gripping the planer. As the planer moves, wood shavings fly off the board. + +![](images/45762c229b96b01ab1b7d1c8e9657becc28b675b0aa636b85c09b2acaf58089f.jpg) + +![](images/bdbea9a915f3342c7c2e4873c61709ed910d0865feada0c6d1625789d3d19d08.jpg) + +![](images/20d8b3d5998ebdf97f2c970ff05f63ec24cda3feb875ff9cbac0909e3b0eb8d9.jpg) + +![](images/545196e6fce68013d00d3c21348aa33a85b3e8ab12a567a50370b68edf6f7e3c.jpg) + +![](images/176cae5f82ead12ea3372c3a2171341f0524fcae79a4f5fbb4356695ec37cce7.jpg) + +![](images/6f9ad79cdedbcf680ed656c671e684c908d875c8d10799eac9c840f63297a13e.jpg) + +![](images/80d3f44da639d617c837ef53d1d3f25218166f94941c72a38cb4c72286378412.jpg) + +![](images/feed5d5c9cdf0f8bc2343757a8c7fc52d1d59d9c43a26bd72f8d9e152fa57b76.jpg) + +![](images/b28400eaf0905d6b6dae9e8ebfd4b057d33e3b9c0d03313eaf8fa674528af5d4.jpg) +Figure 6: Qualitative comparisons between Wan-Move and recent approaches, including both academic methods [53, 16] and commercial solutions [4]. Motions that deviate from the specified trajectories and major visual artifacts are marked with red dots and boxes, respectively. + +![](images/0fa5dbdc77abb63425b60d1dae43729e9a95c5eddfc7e3963c687c9113008d79.jpg) + +![](images/3e90ee178e36a4e4f706782330e459b5875da14f0c87ed329fa81e492122be85.jpg) + +![](images/dff112038f1104fac56cc550c7db09ee3c96d453c06a80b164eaa098c64798c4.jpg) + +![](images/238b6145c641b87c28e7b285031be15e43b1bfd40b19c773c4c14d8589bfd1b9.jpg) +Text prompt +In a bedroom, a woman is sitting on the bed using a laptop. Next to her stands a man wearing a VR headset, holding a gun-shaped controller in each hand while playing a game. 
+ +![](images/2339a6d355558238a8b95c1a9e5b5d38d15f61a49c55ffc56cda8d16a23b6546.jpg) + +![](images/85e012ff92ef94c20adc98af84c11883189777e8ea0b921b1a39995611e14a5f.jpg) + +![](images/36f05d99c6dfbf5483d3c15ef24ecd188a5570da8def7a246929605731bbf89e.jpg) + +![](images/075e4935211f3b4ec308fd9b3f22f4e0738ee8017508e30dd7889092bdf1a0e7.jpg) + +![](images/58a31d858e236cbb9149011f49fe0c8634d66f0cc708d2596de747a4d4e5da10.jpg) + +![](images/005e6a0ccc20d910d89f1b3a97daa95f19aa332f5a278545bd73fa6b27811d0f.jpg) + +![](images/91f1cd4a4a302afb857f0693f0e8a75af2d4cb7d7e2b7db41f5df97c3bd3979f.jpg) + +![](images/3eff93a9fac5432ff130aa86ed5c57e1459a3e24007217ca186071dba60789ef.jpg) + +![](images/3b757414c239e2e8284b66d61b04d7a8377a2d9ad70c6aad7a9ed9cdefd9fe44.jpg) + +![](images/f0cceab1ec192387dc102c0c2243f6b7aed6820d908dd290d39b021e4c797348.jpg) + +![](images/95ed0d1929be72cdccd2e5fc6dce3dae91c907b99d3f707dc32d16f8ecdc55ca.jpg) + +To precisely annotate motion regions, we design an interactive annotation interface as shown in Fig. 3. Annotators click on a target region in the first frame, prompting SAM [23] to generate an initial mask. When the mask exceeds the desired area, annotators add negative points to exclude irrelevant regions. This is critical for isolating articulated motions or small objects in cluttered scenes. After annotation, each video contains at least one point indicating a representative motion, with 192 videos additionally including multiple-object motion trajectories. (iii) Detailed captions. We use Gemini [71] to generate dense descriptions covering objects, actions, and camera dynamics. Unlike segmentation datasets like DAVIS [21], our captions are tailored for video generation tasks. + +# 5 Experiment + +# 5.1 Experimental Setup + +Wan-Move is implemented on top of Wan-I2V-14B [19], a state-of-the-art image-to-video (I2V) generation model. As described in Sec. 3.3, we fine-tune Wan-Move on a high-quality dataset consisting of 2M high-quality videos. 
Only the DiT backbone is trainable, while the image and text encoders remain frozen. During inference, we use a classifier-free guidance scale $w$ of 5.0 unless otherwise specified. Detailed training configurations are provided in the supplementary material.
+
+To quantitatively evaluate the fidelity of generated videos, we compute standard video quality metrics, including FID [72], FVD [73], PSNR, and SSIM [74]. To assess motion accuracy, we measure the L2 distance between ground-truth tracks and those estimated from generated videos, following [13] in denoting this metric as end-point error (EPE). All evaluations are performed at a resolution of $480\mathrm{p}$.
+
+# 5.2 Main Results
+
+Single-object motion control. We present an extensive comparison between Wan-Move and recent motion-controllable video generation methods [53, 16, 14, 20, 4]. Quantitative results on MoveBench and the public DAVIS [21] are shown in Table 1. Qualitative visualizations are presented in Fig. 6. Unlike other methods that rely on point tracks for motion guidance, MagicMotion [20] takes as input
+
+![](images/34a1fdfad5ded9bac2f933564f1d50364d038c977f878a45ebf2d0a4fc4f5130.jpg)
+Figure 7: Qualitative comparisons with MagicMotion [20], controlling motions using sparse signals (i.e., bounding boxes) as input. Motions that break the guidance are marked with red dots.
+
+![](images/d628594124ab4378715ccf01b4c7c0f1b739a60ab6dbc77cd195b2746530ca28.jpg)
+
+Table 2: MoveBench multi-object motion results.
| Method | FID↓ | FVD↓ | PSNR↑ | SSIM↑ | EPE↓ |
| --- | --- | --- | --- | --- | --- |
| ImageConductor | 77.5 | 764.5 | 13.9 | 0.51 | 9.8 |
| Tora | 53.2 | 350.0 | 14.5 | 0.54 | 3.5 |
| Wan-Move (Ours) | 28.8 | 226.3 | 16.7 | 0.62 | 2.2 |
+ +Table 3: Our win rates in 2AFC human study. + +
| Method | Motion accuracy | Motion quality | Visual quality |
| --- | --- | --- | --- |
| LeviTor | 98.2 | 98.0 | 98.8 |
| Tora | 96.2 | 93.8 | 98.4 |
| MagicMotion | 89.4 | 96.4 | 98.2 |
| Kling 1.5 Pro | 47.8 | 53.4 | 50.2 |
+ +Table 4: Ablation on motion guidance strategies. + +
| Motion guidance | FID↓ | FVD↓ | PSNR↑ | SSIM↑ | EPE↓ |
| --- | --- | --- | --- | --- | --- |
| Pixel replication | 17.3 | 91.0 | 15.3 | 0.56 | 3.7 |
| Random track embedding | 15.4 | 89.2 | 16.1 | 0.59 | 2.7 |
| Latent feature replication | 12.2 | 83.5 | 17.8 | 0.64 | 2.6 |
+ +Table 5: Ablation on condition fusion strategies. + +
| Cond. fusion | FID↓ | FVD↓ | EPE↓ | Latency (s) |
| --- | --- | --- | --- | --- |
| ControlNet | 12.4 | 84.6 | 2.5 | 987 (+225) |
| Concat. (Ours) | 12.2 | 83.5 | 2.6 | 765 (+3) |
+
+sparse masks and bounding boxes. Since boxes can be directly derived from segmentation masks, they are inherently compatible with MoveBench, which covers mask annotations. Hence, we also conduct comparisons using box-based inputs in Fig. 7. Among these methods, ImageConductor [53] exhibits poor performance in both image and motion quality, which can be attributed to its reliance on direct pixel-level trajectory injection, where single-pixel features lack sufficient semantic and texture information. The remaining methods report similar EPE (3.2-3.4), despite differing motion guidance approaches: LeviTor [16] and MagicMotion [20] utilize the complex ControlNet [18], while Tora [14] adopts the lightweight adaLN [30]. Notably, our method achieves the best motion control performance (lowest EPE) and video quality (highest PSNR and SSIM) through latent trajectory replication without introducing additional parameters. This underscores the effectiveness of our latent trajectory guidance in adhering to motion constraints. Consistent results on the DAVIS dataset further validate the robustness of our approach.
+
+Multi-object motion control. As MoveBench includes 192 cases with annotated multi-object motion, we further evaluate Wan-Move against baselines [53, 14] in this challenging setting, as presented in Table 2. Our method achieves significantly lower FVD and reduced EPE compared to other methods, highlighting its precise adherence to motion constraints in more complex scenarios.
+
+Human study. We conduct a two-alternative forced-choice (2AFC) human evaluation comparing Wan-Move with SOTA approaches [14, 4, 16, 20]. Each method generates 50 conditioned samples, which are evaluated by 20 participants. The results, presented in Table 3, report Wan-Move's win rates across three metrics: motion accuracy, motion quality, and visual quality. Compared to Tora [14], Wan-Move achieves win rates exceeding $93\%$ in all categories.
When evaluated against the commercial model Kling 1.5 Pro, our method demonstrates competitive performance, with superior win rates in motion quality. This narrows the gap between research-oriented and commercial models.
+
+# 5.3 Ablation Study
+
+Trajectory guidance strategy. We investigate the impact of motion guidance strategies on video quality and motion consistency. Quantitative and qualitative results are presented in Table 4 and Fig. 8, respectively. Pixel replication applies pixel-level copy-paste along the original trajectory, followed by VAE encoding. Yet, since single-pixel features contain limited semantic and texture information, the resulting motion control is weak, as reflected by a high EPE value of 3.7 and generation
+
+![](images/51e7f6fdf2536de1636985e42caceb4eed398f2d0ad68439d73322f0e2715ede.jpg)
+Figure 8: Visualization of various guidance strategies.
+
+Table 6: Ablation on maximum number of point trajectories (see Sec. 3.3) during training.
| Number | FID↓ | FVD↓ | PSNR↑ | SSIM↑ | EPE↓ |
| --- | --- | --- | --- | --- | --- |
| N = 10 | 12.8 | 86.6 | 17.6 | 0.62 | 3.3 |
| N = 100 | 12.9 | 84.7 | 17.7 | 0.65 | 2.7 |
| N = 200 | 12.2 | 83.5 | 17.8 | 0.64 | 2.6 |
| N = 500 | 13.3 | 83.9 | 17.6 | 0.63 | 3.0 |
| N = 1024 | 13.4 | 83.7 | 17.2 | 0.61 | 3.9 |
+ +Table 7: Ablation on actual number of point trajectories during inference. + +
| Number | FID↓ | FVD↓ | PSNR↑ | SSIM↑ | EPE↓ |
| --- | --- | --- | --- | --- | --- |
| N = 0 | 12.8 | 87.9 | 17.9 | 0.64 | 12.4 |
| N = 1 | 12.2 | 83.5 | 17.8 | 0.64 | 2.6 |
| N = 16 | 10.6 | 78.3 | 18.2 | 0.67 | 2.2 |
| N = 512 | 7.7 | 51.0 | 20.3 | 0.75 | 1.5 |
| N = 1024 | 6.2 | 45.2 | 21.9 | 0.79 | 1.1 |
+ +Table 8: Ablation on different I2V backbones and training data scale. Wan-Move attains better results under the same setting. + +
| Method | Backbone | Data scale | FID↓ | FVD↓ | PSNR↑ | SSIM↑ | EPE↓ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MagicMotion | CogVideoX-5B | 23K | 17.5 | 96.7 | 14.9 | 0.56 | 3.2 |
| Wan-Move-Cog-23K | CogVideoX-5B | 23K | 16.0 | 92.3 | 16.8 | 0.59 | 2.8 |
| Tora | CogVideoX-5B | 630K | 22.5 | 100.4 | 15.7 | 0.55 | 3.3 |
| Wan-Move-Cog-630K | CogVideoX-5B | 630K | 14.1 | 87.3 | 17.2 | 0.61 | 2.8 |
| Wan-Move | Wan2.1-I2V-14B | 2000K | 12.2 | 83.5 | 17.8 | 0.64 | 2.6 |
+ +Table 9: Large-motion and out-of-distribution-motion subset. + +
| Subset | Method | FID↓ | FVD↓ | EPE↓ |
| --- | --- | --- | --- | --- |
| Large | Tora | 29.1 | 126.3 | 4.3 |
| | MagicMotion | 24.6 | 119.3 | 4.1 |
| | Wan-Move | 14.5 | 86.6 | 3.0 |
| OOD | Tora | 28.9 | 120.2 | 4.0 |
| | MagicMotion | 23.5 | 115.7 | 3.9 |
| | Wan-Move | 13.5 | 86.0 | 2.8 |
+
+failures (see Fig. 8). The random track embedding approach, originally proposed for pixel-space representations [13], is adapted to assign randomly initialized embeddings in latent space for injecting motion guidance. While effective for rigid single-region control, this approach fails to incorporate contextual information from surrounding regions, resulting in suboptimal video quality (lower PSNR and SSIM) and stiff motion near tracked points. For example, in Fig. 8, the hand moves but the surrounding bread remains static. In contrast, our proposed latent feature replication method achieves superior video quality (highest PSNR of 17.8) and precise motion control (lowest EPE of 2.6).
+
+Condition fusion strategy. We compare different motion condition fusion approaches, namely ControlNet [18] and direct concatenation (our approach). The results are presented in Table 5. Notably, simple concatenation of motion conditions with the input noise achieves performance comparable to ControlNet in motion-controllable generation. Yet, ControlNet introduces significant additional modules, substantially increasing inference latency by 225 seconds over the original I2V model. In contrast, Wan-Move preserves the base model architecture and only adds a one-time trajectory extraction step, adding just 3 seconds of inference time.
+
+Number of point trajectories during training. Table 6 evaluates the impact of the maximum number of point tracks $(N)$ during training. As $N$ increases from 10 to 200, the model's motion-following capability improves progressively, evidenced by the decreasing EPE. The optimal performance, in terms of both structural similarity (SSIM) and EPE, is achieved at $N = 200$. However, further increasing the number of point tracks leads to a rise in EPE. This can be attributed to the mismatch between the dense point tracks used in training and the sparse point tracks used during evaluation.
+
+Number of point tracks during inference.
+
+Table 7 ablates the performance of Wan-Move across varying numbers of point trajectories on MoveBench. As the number of tracks increases, EPE drops significantly, indicating better motion guidance and enhanced temporal coherence. When given the maximum number of point trajectories extracted by CoTracker, Wan-Move achieves the lowest EPE of 1.1. Though trained with at most 200 tracks, the model shows strong generalization capability. Notably, naive I2V inference (with no point tracks) yields PSNR and SSIM scores comparable to motion-controlled generation, confirming that our model strongly retains its inherent I2V quality. Naive I2V samples generated by Wan-Move are presented in Fig. 9.
+
+Backbones and data scale. In pursuit of the best generation quality, we initially train Wan-Move with a large-scale dataset and a strong backbone. To ensure a fair comparison with the leading approaches MagicMotion and Tora, we align Wan-Move's backbone and training data scale with theirs. This yields two variants, i.e., Wan-Move-Cog-23K and Wan-Move-Cog-630K, which are trained on 23K and 630K data samples respectively, using CogVideoX1.5-5B-I2V [47] as the backbone. The
+
+![](images/71f967b4ef958bfbf0fc45ca9c612d1e6079e1c94ae82b0a1106c6951776fab9.jpg)
+A group of ants is busily moving in and out of the nest. Their bodies display alternating patterns of brown and black.
+
+![](images/385bb34f97143a28b13dabf21df6213f8f1ab922b09ca77f998962465a8ce693.jpg)
+
+![](images/564f244057d9debbc32d54d854ac25a1a09802fc77debe217d917af6648f2d01.jpg)
+
+Figure 9: I2V results of Wan-Move (no point tracks).
+![](images/e5c614d1d58ed69a1bfdb6a58dd8cc208719768fc5be6ac9e456e0ffd52cfb7e.jpg)
+On a wooden table, a piece of brown burlap is unfolded, revealing a bouquet placed on pink paper. The bouquet consists of layers of purple flowers, pink flowers, and white blossoms.
+ +![](images/5bae723b6d3de85e163e5ebe2ced90c90d35909866999ad6252707e6fd6823e0.jpg) + +![](images/319de8e40f12a3f59ee930424806857ee3c388474d9cfc74edb87f5eacb1977e.jpg) + +detailed comparison on MoveBench is shown in Table 8. Under the same backbone and data scale, Wan-Move still outperforms these two powerful methods. + +Evaluation on large-motion and out-of-distribution motion scenarios. To further verify the model generalizability, we curate subsets from MoveBench containing high-amplitude and out-of-distribution motion control cases. For each video, its motion amplitude score is computed as the average of the top $2\%$ largest optical flow values extracted by RAFT [75]. The top $20\%$ highest-score videos are selected as large-motion videos. Besides, we manually curate 50 uncommon motion cases as out-of-distribution subset, including complex foreground-background interactions, objects moving out of frame, and rare camera motions. Evaluation results on these challenging examples are shown in Table 9. Notably, Wan-Move consistently outperforms two leading baselines, with performance gaps further widening under these difficult condition. In addition, Wan-Move's performance only marginally drops compared to its results on the full benchmark, demonstrating its robustness. + +# 5.4 Motion Control Applications + +As point trajectories can flexibly represent various types of motion, Wan-Move supports a wide range of motion control applications, as showcased in Fig. 1. First, rows 1-2 show object control using single or multiple point trajectories. For camera control (row 3), we can either drag background elements directly or follow the approach of work [13]. The latter estimates a point cloud from a monocular depth predictor [76], projects it along a camera pose trajectory, and applies z-buffering to obtain camera-aligned 2D trajectories. 
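As a concrete illustration, the lift-project-z-buffer pipeline for camera control can be sketched as below. This is a minimal sketch under simple pinhole assumptions: the function name, the sparse sampling stride, and the toy intrinsics are illustrative choices of ours, and the actual implementation (which uses the monocular depth predictor [76] of the cited work) may differ.

```python
import numpy as np

def camera_trajectory_tracks(depth, K, poses, stride=16):
    """Sketch: lift a sparse pixel grid to 3D using a (predicted) depth map,
    move the camera along `poses`, and re-project to 2D point tracks.
    depth: (H, W) depth map for the first frame.
    K: (3, 3) pinhole intrinsics.
    poses: list of (4, 4) rigid transforms, one per output frame.
    Returns tracks of shape (num_frames, N, 2); occluded points are NaN."""
    H, W = depth.shape
    ys, xs = np.mgrid[0:H:stride, 0:W:stride]
    xs, ys = xs.ravel(), ys.ravel()
    z = depth[ys, xs]
    # Back-project sampled pixels to 3D points in the first camera's frame.
    pts = np.linalg.inv(K) @ np.stack([xs * z, ys * z, z])   # (3, N)
    pts_h = np.vstack([pts, np.ones(pts.shape[1])])          # homogeneous (4, N)
    tracks = []
    for T in poses:
        cam = (T @ pts_h)[:3]             # points in the moved camera's frame
        uvw = K @ cam
        uv = uvw[:2] / uvw[2]             # pinhole projection to pixels
        # z-buffering: per rounded pixel, keep only the nearest point and
        # mark the rest as occluded (NaN), so they give no motion guidance.
        vis = np.full(cam.shape[1], True)
        buf = {}
        for i in np.argsort(cam[2]):      # visit points near to far
            key = (int(round(uv[0, i])), int(round(uv[1, i])))
            if key in buf:
                vis[i] = False            # a nearer point already owns this pixel
            else:
                buf[key] = i
        frame_uv = uv.T.copy()
        frame_uv[~vis] = np.nan
        tracks.append(frame_uv)
    return np.stack(tracks)               # (num_frames, N, 2)
```

For instance, passing an identity pose followed by a pose with a small x-translation produces tracks that shift uniformly along the horizontal axis, mimicking a camera pan.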
Following [13], we perform primitive-level control by rotating a virtual sphere to generate projected 2D trajectories for globe motion (row 4). In row 5, we enable motion transfer by applying trajectories extracted from one video to update the condition features of a different image. Row 6 shows 3D rotation control by estimating depth-based positions, applying a rotation, and projecting the results to 2D. We refer readers to the supplementary file for more visualizations and full videos.

# 6 Conclusion and Discussion

We propose Wan-Move, a simple and scalable framework for precise motion control in video generation. It represents motion with point trajectories and transfers them into latent coordinates through spatial mapping, requiring no extra motion encoder. We then inject trajectory guidance into the first-frame condition features via latent feature replication, achieving effective motion control without architectural changes. For rigorous evaluation, we further present MoveBench, a comprehensive and well-curated benchmark featuring diverse content categories with hybrid-verified annotations. Extensive experiments on MoveBench and public datasets show that Wan-Move generates high-quality, long-duration (5s, 480p) videos with motion controllability on par with commercial tools such as Kling 1.5 Pro's Motion Brush. We believe our open-sourced solution offers an efficient path to scaling motion-controllable video generation and will empower a wide range of creators.

Limitations and broader impacts. Wan-Move uses point trajectories to guide motion, which can be unreliable when tracks are missing due to occlusion. While we observe that short-term occlusions can be recovered once the point reappears, showing a degree of generalization, prolonged absence may lead to loss of control (see Appendix). As with other generative models, Wan-Move carries dual-use potential.
Its ability to produce realistic, controllable videos can benefit creative industries, education, and simulation, but also risks misuse for generating misleading or harmful content. + +# 7 Acknowledgment + +This work was supported by the National Natural Science Foundation of China (Grant No. 62576191). + +# References + +[1] Weijia Wu, Zhuang Li, Yuchao Gu, Rui Zhao, Yefei He, David Junhao Zhang, Mike Zheng Shou, Yan Li, Tingting Gao, and Di Zhang. Draganything: Motion control for anything using entity representation. In ECCV, 2024. +[2] Shengming Yin, Chenfei Wu, Jian Liang, Jie Shi, Houqiang Li, Gong Ming, and Nan Duan. Dragnuwa: Fine-grained control in video generation by integrating text, image, and trajectory. arXiv:2308.08089, 2023. +[3] Ryan Burgert, Yuancheng Xu, Wenqi Xian, Oliver Pilarski, Pascal Clausen, Mingming He, Li Ma, Yitong Deng, Lingxiao Li, Mohsen Mousavi, Michael Ryoo, Paul Debevec, and Ning Yu. Go-with-the-flow: Motion-controllable video diffusion models using real-time warped noise. In CVPR, 2025. +[4] Kuaishou. Kling ai. https://klingai.kuaishou.com, 2024.06. +[5] Runway. Gen-3. https://runwayml.com, 2024.06. +[6] Weijie Kong, Qi Tian, Zijian Zhang, Rox Min, Zuozhuo Dai, Jin Zhou, Jiangfeng Xiong, Xin Li, Bo Wu, Jianwei Zhang, et al. Hunyuanvideo: A systematic framework for large video generative models. arXiv:2412.03603, 2024. +[7] Jianzong Wu, Xiangtai Li, Yanhong Zeng, Jiangning Zhang, Qianyu Zhou, Yining Li, Yunhai Tong, and Kai Chen. Motionbooth: Motion-aware customized text-to-video generation. NeurIPS, 2024. +[8] Jiawei Wang, Yuchen Zhang, Jiaxin Zou, Yan Zeng, Guoqiang Wei, Liping Yuan, and Hang Li. Boximator: Generating rich and controllable motions for video synthesis. arXiv:2402.01566, 2024. +[9] Zuozhuo Dai, Zhenghao Zhang, Yao Yao, Bingxue Qiu, Siyu Zhu, Long Qin, and Weizhi Wang. Animateanything: Fine-grained open domain image animation with motion guidance. arXiv:2311.12886, 2023. 
+[10] Haitao Zhou, Chuang Wang, Rui Nie, Jinlin Liu, Dongdong Yu, Qian Yu, and Changhu Wang. Trackgo: A flexible and efficient method for controllable video generation. In AAAI, 2025. +[11] Mathis Koroglu, Hugo Caselles-Dupré, Guillaume Jeanneret Sanmiguel, and Matthieu Cord. Onlyflow: Optical flow based motion conditioning for video diffusion models. arXiv:2411.10501, 2024. +[12] Xiaoyu Shi, Zhaoyang Huang, Fu-Yun Wang, Weikang Bian, Dasong Li, Yi Zhang, Manyuan Zhang, Ka Chun Cheung, Simon See, Hongwei Qin, et al. Motion-i2v: Consistent and controllable image-to-video generation with explicit motion modeling. In SIGGRAPH, 2024. +[13] Daniel Geng, Charles Herrmann, Junhwa Hur, Forrester Cole, Serena Zhang, Tobias Pfaff, Tatiana Lopez-Guevara, Carl Doersch, Yusuf Aytar, Michael Rubinstein, et al. Motion prompting: Controlling video generation with motion trajectories. arXiv:2412.02700, 2024. +[14] Zhenghao Zhang, Junchao Liao, Menghao Li, Zuozhuo Dai, Bingxue Qiu, Siyu Zhu, Long Qin, and Weizhi Wang. Tora: Trajectory-oriented diffusion transformer for video generation. arXiv:2407.21705, 2024. +[15] Zhouxia Wang, Ziyang Yuan, Xintao Wang, Yaowei Li, Tianshui Chen, Menghan Xia, Ping Luo, and Ying Shan. Motionctrl: A unified and flexible motion controller for video generation. In SIGGRAPH, 2024. + +[16] Hanlin Wang, Hao Ouyang, Qiuyu Wang, Wen Wang, Ka Leong Cheng, Qifeng Chen, Yujun Shen, and Limin Wang. Levitor: 3d trajectory oriented image-to-video synthesis. arXiv:2412.15214, 2024. +[17] Yuqing Chen, Junjie Wang, Lin Liu, Ruihang Chu, Xiaopeng Zhang, Qi Tian, and Yujiu Yang. O-disco-edit: Object distortion control for unified realistic video editing. arXiv preprint arXiv:2509.01596, 2025. +[18] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In ICCV, 2023. +[19] Team Wan, Ang Wang, Baole Ai, Bin Wen, Chaojie Mao, Chen-Wei Xie, Di Chen, Feiwu Yu, Haiming Zhao, Jianxiao Yang, et al. 
Wan: Open and advanced large-scale video generative models. arXiv preprint arXiv:2503.20314, 2025.
[20] Quanhao Li, Zhen Xing, Rui Wang, Hui Zhang, Qi Dai, and Zuxuan Wu. Magicmotion: Controllable video generation with dense-to-sparse trajectory guidance. arXiv:2503.16421, 2025.
[21] Jordi Pont-Tuset, Federico Perazzi, Sergi Caelles, Pablo Arbeláez, Alex Sorkine-Hornung, and Luc Van Gool. The 2017 DAVIS challenge on video object segmentation. arXiv:1704.00675, 2017.
[22] Jiaxu Miao, Xiaohan Wang, Yu Wu, Wei Li, Xu Zhang, Yunchao Wei, and Yi Yang. Large-scale video panoptic segmentation in the wild: A benchmark. In CVPR, 2022.
[23] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment Anything. In ICCV, 2023.
[24] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet. Video diffusion models. arXiv:2204.03458, 2022.
[25] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P. Kingma, Ben Poole, Mohammad Norouzi, David J. Fleet, and Tim Salimans. Imagen video: High definition video generation with diffusion models. arXiv:2210.02303, 2022.
[26] Ruben Villegas, Mohammad Babaeizadeh, Pieter-Jan Kindermans, Hernan Moraldo, Han Zhang, Mohammad Taghi Saffar, Santiago Castro, Julius Kunze, and Dumitru Erhan. Phenaki: Variable length video generation from open domain textual description. arXiv:2210.02399, 2022.
[27] Haoxin Chen, Yong Zhang, Xiaodong Cun, Menghan Xia, Xintao Wang, Chao Weng, and Ying Shan. Videocrafter2: Overcoming data limitations for high-quality video diffusion models. In CVPR, 2024.
[28] Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, and Shiwei Zhang. Modelscope text-to-video technical report. arXiv:2308.06571, 2023.
+[29] Omer Bar-Tal, Hila Chefer, Omer Tov, Charles Herrmann, Roni Paiss, Shiran Zada, Ariel Ephrat, Junhwa Hur, Guanghui Liu, Amit Raj, et al. Lumiere: A space-time diffusion model for video generation. In SIGGRAPH Asia, 2024. +[30] William Peebles and Saining Xie. Scalable diffusion models with transformers. In ICCV, 2023. +[31] Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. arXiv:2205.15868, 2022. +[32] Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, et al. Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. In ICLR, 2024. +[33] Yang Jin, Zhicheng Sun, Ningyuan Li, Kun Xu, Kun Xu, Hao Jiang, Nan Zhuang, Quzhe Huang, Yang Song, Yadong Mu, and Zhouchen Lin. Pyramidal flow matching for efficient video generative modeling. In ICLR, 2025. + +[34] Yoav HaCohen, Nisan Chiprut, Benny Brazowski, Daniel Shalem, Dudu Moshe, Eitan Richardson, Eran Levin, Guy Shiran, Nir Zabari, Ori Gordon, Poriya Panet, Sapir Weissbuch, Victor Kulikov, Yaki Bitterman, Zeev Melumian, and Ofir Bibi. Ltx-video: Realtime video latent diffusion. arXiv:2501.00103, 2024. +[35] Xin Ma, Yaohui Wang, Gengyun Jia, Xinyuan Chen, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, and Yu Qiao. Latte: Latent diffusion transformer for video generation. TMLR, 2025. +[36] Bin Lin, Yunyang Ge, Xinhua Cheng, Zongjian Li, Bin Zhu, Shaodong Wang, Xianyi He, Yang Ye, Shenghai Yuan, Liuhan Chen, et al. Open-sora plan: Open-source large video generation model. arXiv:2412.00131, 2024. +[37] OpenAI. Video generation models as world simulators, 2024. +[38] Adam Polyak, Amit Zohar, Andrew Brown, Andros Tjandra, Animesh Sinha, Ann Lee, Apoorv Vyas, Bowen Shi, Chih-Yao Ma, Ching-Yao Chuang, et al. Movie gen: A cast of media foundation models. arXiv:2410.13720, 2024. 
+[39] Fan Bao, Chengdong Xiang, Gang Yue, Guande He, Hongzhou Zhu, Kaiwen Zheng, Min Zhao, Shilong Liu, Yaole Wang, and Jun Zhu. Vidu: a highly consistent, dynamic and skilled text-to-video generator with diffusion models. arXiv:2405.04233, 2024. +[40] MiniMax. Hailuo ai. https://hailuoai.com/video, 2024.09. +[41] Zangwei Zheng, Xiangyu Peng, Tianji Yang, Chenhui Shen, Shenggui Li, Hongxin Liu, Yukun Zhou, Tianyi Li, and Yang You. Open-sora: Democratizing efficient video production for all, March 2024. +[42] GenmoTeam. Mochi 1. https://github.com/genmoai/models, 2024. +[43] Guoqing Ma, Haoyang Huang, Kun Yan, Liangyu Chen, Nan Duan, Shengming Yin, Changyi Wan, Ranchen Ming, Xiaoniu Song, Xing Chen, et al. Step-video-t2v technical report: The practice, challenges, and future of video foundation model. arXiv:2502.10248, 2025. +[44] Junsong Chen, Yuyang Zhao, Jincheng Yu, Ruihang Chu, Junyu Chen, Shuai Yang, Xianbang Wang, Yicheng Pan, Daquan Zhou, Huan Ling, et al. Sana-video: Efficient video generation with block linear diffusion transformer. arXiv preprint arXiv:2509.24695, 2025. +[45] Shuai Yang, Wei Huang, Ruihang Chu, Yicheng Xiao, Yuyang Zhao, Xianbang Wang, Muyang Li, Enze Xie, Yingcong Chen, Yao Lu, et al. Longlive: Real-time interactive long video generation. arXiv preprint arXiv:2509.22622, 2025. +[46] Shentong Mo, Enze Xie, Ruihang Chu, Lanqing Hong, Matthias Niessner, and Zhenguo Li. Dit-3d: Exploring plain diffusion transformers for 3d shape generation. Advances in neural information processing systems, 36:67960–67971, 2023. +[47] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, Da Yin, Xiaotao Gu, Yuxuan Zhang, Weihan Wang, Yean Cheng, Ting Liu, Bin Xu, Yuxiao Dong, and Jie Tang. CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer. In ICLR, 2025. +[48] Koichi Namekata, Sherwin Bahmani, Ziyi Wu, Yash Kant, Igor Gilitschenski, and David B Lindell. 
Sg-i2v: Self-guided trajectory control in image-to-video generation. arXiv:2411.04989, 2024.
[49] Haonan Qiu, Zhaoxi Chen, Zhouxia Wang, Yingqing He, Menghan Xia, and Ziwei Liu. Freetraj: Tuning-free trajectory control in video diffusion models. arXiv:2406.16863, 2024.
[50] Wan-Duo Kurt Ma, John P Lewis, and W Bastiaan Kleijn. Trailblazer: Trajectory control for diffusion-based video generation. In SIGGRAPH Asia, 2024.
[51] Yash Jain, Anshul Nasery, Vibhav Vineet, and Harkirat Behl. Peekaboo: Interactive video generation via masked-diffusion. In CVPR, 2024.
[52] Xiang Wang, Hangjie Yuan, Shiwei Zhang, Dayou Chen, Jiuniu Wang, Yingya Zhang, Yujun Shen, Deli Zhao, and Jingren Zhou. VideoComposer: Compositional video synthesis with motion controllability. In NeurIPS, 2023.
[53] Yaowei Li, Xintao Wang, Zhaoyang Zhang, Zhouxia Wang, Ziyang Yuan, Liangbin Xie, Ying Shan, and Yuexian Zou. Image conductor: Precision control for interactive video synthesis. In AAAI, 2025.
[54] Xiao Fu, Xian Liu, Xintao Wang, Sida Peng, Menghan Xia, Xiaoyu Shi, Ziyang Yuan, Pengfei Wan, Di Zhang, and Dahua Lin. 3dtrajmaster: Mastering 3d trajectory for multi-entity motion in video generation. In ICLR, 2024.
[55] Qinghe Wang, Yawen Luo, Xiaoyu Shi, Xu Jia, Huchuan Lu, Tianfan Xue, Xintao Wang, Pengfei Wan, Di Zhang, and Kun Gai. Cinemaster: A 3d-aware and controllable framework for cinematic text-to-video generation. arXiv:2502.08639, 2025.
[56] Zekai Gu, Rui Yan, Jiahao Lu, Peng Li, Zhiyang Dou, Chenyang Si, Zhen Dong, Qifeng Liu, Cheng Lin, Ziwei Liu, et al. Diffusion as shader: 3d-aware video diffusion for versatile video generation control. arXiv:2501.03847, 2025.
[57] Yingjie Chen, Yifang Men, Yuan Yao, Miaomiao Cui, and Liefeng Bo. Perception-as-control: Fine-grained controllable image animation with 3d-aware motion representation. arXiv:2501.05020, 2025.
+[58] Xiang Wang, Shiwei Zhang, Haonan Qiu, Ruihang Chu, Zekun Li, Yingya Zhang, Changxin Gao, Yuehuan Wang, Chunhua Shen, and Nong Sang. Replace anyone in videos. arXiv preprint arXiv:2409.19911, 2024. +[59] Bin Xia, Jiyang Liu, Yuechen Zhang, Bohao Peng, Ruihang Chu, Yitong Wang, Xinglong Wu, Bei Yu, and Jiaya Jia. Dreamve: Unified instruction-based image and video editing. arXiv preprint arXiv:2508.06080, 2025. +[60] Mariam Hassan, Sebastian Stapf, Ahmad Rahimi, Pedro Rezende, Yasaman Haghighi, David Brüggemann, Isinsu Katircioglu, Lin Zhang, Xiaoran Chen, Suman Saha, et al. Gem: A generalizable ego-vision multimodal world model for fine-grained ego-motion, object dynamics, and scene composition control. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 22404-22415, 2025. +[61] Aram Davtyan, Sepehr Sameni, Björn Ommer, and Paolo Favaro. Cage: Unsupervised visual composition and animation for controllable video generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 16163-16171, 2025. +[62] Maxime Oquab, Timothee Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. +[63] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv:2311.15127, 2023. +[64] Diederik P Kingma, Max Welling, et al. Auto-encoding variational bayes, 2013. +[65] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. 
[66] Hyung Won Chung, Noah Constant, Xavier Garcia, Adam Roberts, Yi Tay, Sharan Narang, and Orhan Firat. Unimax: Fairer and more effective language sampling for large-scale multilingual pretraining. arXiv:2304.09151, 2023.
[67] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In ICCV, 2023.
[68] Nikita Karaev, Ignacio Rocco, Benjamin Graham, Natalia Neverova, Andrea Vedaldi, and Christian Rupprecht. Cotracker: It is better to track together. In ECCV, 2024.
[69] Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. arXiv:2210.02747, 2022.
[70] JovianZM. Pexels-400k dataset. https://huggingface.co/datasets/jovianzm/Pexels-400k, 2024.
[71] Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: a family of highly capable multimodal models. arXiv:2312.11805, 2023.
[72] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In NeurIPS, 2017.
[73] Thomas Unterthiner, Sjoerd Van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Fvd: A new metric for video generation. In ICLR, 2019.
[74] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. TIP, 2004.
[75] Zachary Teed and Jia Deng. Raft: Recurrent all-pairs field transforms for optical flow. In European conference on computer vision, pages 402-419. Springer, 2020.
[76] Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth anything v2. In NeurIPS, 2024.
[77] Tsai-Shien Chen, Aliaksandr Siarohin, Willi Menapace, Ekaterina Deyneka, Hsiang-wei Chao, Byung Eun Jeon, Yuwei Fang, Hsin-Ying Lee, Jian Ren, Ming-Hsuan Yang, and Sergey Tulyakov. Panda-70m: Captioning 70m videos with multiple cross-modality teachers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.
[78] Pixabay. Discover and download free videos - pixabay, 2025.
[79] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report. arXiv preprint arXiv:2502.13923, 2025.
[80] PyTorch Team. Pytorch fully sharded data parallel (fsdp). https://pytorch.org/docs/stable/fsdp.html, 2021.
[81] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017.
[82] Sam Ade Jacobs, Masahiro Tanaka, Chengming Zhang, Minjia Zhang, Shuaiwen Leon Song, Samyam Rajbhandari, and Yuxiong He. Deepspeed ulysses: System optimizations for enabling training of extreme long sequence transformer models. arXiv preprint arXiv:2309.14509, 2023.
[83] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024.

# NeurIPS Paper Checklist

# 1. Claims

Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?

Answer: [Yes]

Justification: In the abstract and introduction, we state our task objectives, the weaknesses of current methods, and the improvements we propose to address them.
Guidelines:

- The answer NA means that the abstract and introduction do not include the claims made in the paper.
- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

# 2. Limitations

Question: Does the paper discuss the limitations of the work performed by the authors?

Answer: [Yes]

Justification: We discuss the limitations of this work at the end of the main paper.

Guidelines:

- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
- The authors are encouraged to create a separate "Limitations" section in their paper.
- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting.
Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

# 3. Theory assumptions and proofs

Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?

Answer: [NA]

Justification: The paper does not include theoretical results.

Guidelines:

- The answer NA means that the paper does not include theoretical results.
- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
- All assumptions should be clearly stated or referenced in the statement of any theorems.
- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
- Theorems and Lemmas that the proof relies upon should be properly referenced.

# 4. Experimental result reproducibility

Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?

Answer: [Yes]

Justification: We fully disclose all the experimental information in Section 5.1 and supplementary material.

Guidelines:

- The answer NA means that the paper does not include experiments.
- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

# 5. Open access to data and code

Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?

Answer: [Yes]

Justification: The code will be open-sourced upon acceptance.

Guidelines:

- The answer NA means that paper does not include experiments requiring code.
- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

# 6. Experimental setting/details

Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?

Answer: [Yes]

Justification: We specify the training and test details in Section 5.1 and supplementary material.

Guidelines:

- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.

# 7. Experiment statistical significance

Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

Answer: [NA]

Justification: For motion-controllable video generation tasks, error bars are uncommon, and there is currently no mature error evaluation protocol in this community.

Guidelines:

- The answer NA means that the paper does not include experiments.
+- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. +- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). +- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) +- The assumptions made should be given (e.g., Normally distributed errors). + +- It should be clear whether the error bar is the standard deviation or the standard error of the mean. +- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified. +- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). +- If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. + +# 8. Experiments compute resources + +Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? + +Answer: [Yes] + +Justification: We provide training details in the supplementary material. + +Guidelines: + +- The answer NA means that the paper does not include experiments. +- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. 
+- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. +- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). + +# 9. Code of ethics + +Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines? + +Answer: [Yes] + +Justification: We have carefully reviewed the NeurIPS Code of Ethics to ensure that our submission complies with all regulations. + +Guidelines: + +- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. +- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. +- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). + +# 10. Broader impacts + +Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? + +Answer: [Yes] + +Justification: We have discussed broader impacts in Section 6. + +Guidelines: + +- The answer NA means that there is no societal impact of the work performed. +- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. +- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. + +- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. 
However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. +- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. +- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). + +# 11. Safeguards + +Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? + +Answer: [NA] + +Justification: The pre-trained model used in our paper is an image-to-video model, and the content of the generated video is specified by the input image. The paper poses no such risks. + +Guidelines: + +- The answer NA means that the paper poses no such risks. +- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. +- Datasets that have been scraped from the Internet could pose safety risks. 
The authors should describe how they avoided releasing unsafe images. +- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. + +# 12. Licenses for existing assets + +Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? + +Answer: [Yes] + +Justification: The models and data used in this paper are open-sourced and authorized. + +Guidelines: + +- The answer NA means that the paper does not use existing assets. +- The authors should cite the original paper that produced the code package or dataset. +- The authors should state which version of the asset is used and, if possible, include a URL. +- The name of the license (e.g., CC-BY 4.0) should be included for each asset. +- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. +- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. + +- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. +- If this information is not available online, the authors are encouraged to reach out to the asset's creators. + +# 13. New assets + +Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? + +Answer: [NA] + +Justification: The paper does not release new assets. + +Guidelines: + +- The answer NA means that the paper does not release new assets. 
+- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. +- The paper should discuss whether and how consent was obtained from people whose asset is used. +- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. + +# 14. Crowdsourcing and research with human subjects + +Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? + +Answer: [NA] + +Justification: The paper does not involve crowdsourcing nor research with human subjects. + +Guidelines: + +- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. +- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. +- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. + +# 15. Institutional review board (IRB) approvals or equivalent for research with human subjects + +Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? + +Answer: [NA] + +Justification: The paper does not involve crowdsourcing nor research with human subjects. + +Guidelines: + +- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. 
+- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
+- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
+- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
+
+# 16. Declaration of LLM usage
+
+Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
+
+Answer: [NA]
+
+Justification: LLMs are not an important, original, or non-standard component of the core methods in this research.
+
+Guidelines:
+
+- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
+- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
+
+# Wan-Move: Motion-controllable Video Generation via Latent Trajectory Guidance
+
+# Supplementary Material
+
+# Contents
+
+8 Implementation Details
+  8.1 Training Data Details
+  8.2 MoveBench Construction Details
+  8.3 Training and Inference Configuration
+
+9 Additional Experiments
+  9.1 Choice of Feature Replication Strategies under Trajectory Overlap
+  9.2 Choice of Different Training Strategies
+  9.3 Model Performance Under Trajectory Disappearance
+  9.4 Failure Cases
+
+10 Qualitative Visualizations
+  10.1 More Qualitative Comparisons
+  10.2 More Camera Control Results
+  10.3 More Motion Transfer Results
+  10.4 More 3D Rotation Results
+
+# 8 Implementation Details
+
+# 8.1 Training Data Details
+
+Table 10 presents the composition of the filtered training datasets, which are sourced from Panda70M [77], Pixabay [78], Pexels [70], and YouTube. YouTube videos are independently collected for this study. To prevent data leakage, the videos from Pexels are strictly separated from those in the proposed MoveBench.
+
+All videos for training are captioned using Qwen2.5-VL [79], with the prompt structure illustrated in Fig. 10. This prompt emphasizes motion and camera attributes while preserving the fundamental scene descriptions, ensuring that the model semantically understands the context and generates physically plausible motions. The same captioning prompt is applied to the videos in MoveBench.
+
+# 8.2 MoveBench Construction Details
+
+Video content clustering. Following the initial filtering stage, we conduct a rigorous content clustering process to ensure broad scenario coverage in our benchmark. Specifically, we sample 16 frames per filtered video and average their SigLIP [67] features. Using k-means clustering, we group these features into 54 distinct content categories. Each category is then automatically assigned a label, e.g., Tennis, using Qwen2.5-VL [79].
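The automated part of this clustering step can be sketched as follows, assuming per-frame SigLIP embeddings have already been extracted; the plain NumPy k-means below stands in for whichever library implementation was actually used, and all function names are illustrative:

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Plain Lloyd's k-means: returns a cluster label per row of x."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign every point to its nearest centroid.
        dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its members (skip empty clusters).
        for j in range(k):
            if (labels == j).any():
                centroids[j] = x[labels == j].mean(axis=0)
    return labels

def cluster_videos(per_video_frame_feats, k=54):
    """per_video_frame_feats: list of (16, D) arrays of frame embeddings.
    Each video is represented by the average of its 16 sampled frame features."""
    video_feats = np.stack([f.mean(axis=0) for f in per_video_frame_feats])
    return kmeans(video_feats, k)

# Toy usage: random vectors stand in for SigLIP features of 200 videos.
rng = np.random.default_rng(1)
labels = cluster_videos([rng.standard_normal((16, 64)) for _ in range(200)], k=5)
```

In the paper's setting, `k` would be 54 and the labels would then be named with Qwen2.5-VL before the manual selection step.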
Finally, we manually select the 15-25 most representative videos per category to keep the benchmark both diverse and balanced.
+
+Interactive labeling. Existing models often fail to accurately identify representative motion regions in videos: the most prominent motion is not always the best choice, and many motions terminate prematurely.
+
+Table 10: The statistics of the training datasets.
+
| Dataset source | Number | Captioner |
| --- | --- | --- |
| Panda70M [77] | 0.56M | Qwen2.5-VL |
| Pixabay [78] | 0.42M | Qwen2.5-VL |
| Pexels [70] | 0.25M | Qwen2.5-VL |
| YouTube | 0.75M | Qwen2.5-VL |
+
+![](images/8e1f1ce07b5d5d28a06575bbc88ce551756208ec92738774723609758d4b0d88.jpg)
+Figure 10: Prompt for video caption.
+
+![](images/79450e1fc2fbdc696f9fefa3fe590e78010a98b21ad3077012ecf667247bc937.jpg)
+Figure 11: The interactive annotation interface displays the video (left) and its first frame (right). Users click green positive points to specify the start point of a motion trajectory, and red negative points to exclude irrelevant regions if needed. SAM segments masks of moving objects and regions for user review. To annotate multiple motion trajectories, users must assign different object IDs.
+
+To facilitate precise annotation of motion regions, we introduce an interactive labeling interface (Fig. 11) for selecting the initial motion point and its corresponding mask in the first frame. Annotators begin by selecting a target point in the initial frame, prompting SAM [23] to generate a preliminary segmentation mask. If the mask extends beyond the desired area, negative points can be added to exclude irrelevant regions. This method effectively isolates articulated motions or small objects. For subsequent frames, point trajectories are automatically extracted using CoTracker [68].
+
+# 8.3 Training and Inference Configuration
+
+As described in Sec. 5.1 of the main paper, we employ Wan-I2V-14B [19] as the base I2V model. During training, both the DiT and umT5 components of Wan are wrapped with Fully Sharded Data Parallel (FSDP) [80], with parameters cast to torch.bfloat16 for memory efficiency. The training employs the AdamW optimizer [81] with a weight decay of 1e-3 and a base learning rate of 5e-6. The first 2,000 steps are used for linear warm-up to enable a smooth transition from the initial I2V generation (corresponding to 0 point trajectories) to motion-controllable video generation. We adopt the flow matching objective for optimization, with the number of time sampling steps set to 1,000 during training.
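Read literally, the schedule and objective above can be sketched as below; the constant learning rate after warm-up and the rectified-flow velocity-target convention are our assumptions, since the text only states the warm-up length, the base rate, and the 1,000 timesteps:

```python
import numpy as np

def lr_at(step, base_lr=5e-6, warmup_steps=2000):
    """Linear warm-up to the base learning rate over the first 2,000 steps;
    held constant afterwards (the post-warm-up behaviour is an assumption)."""
    return base_lr * min(1.0, (step + 1) / warmup_steps)

def flow_matching_pair(x0, timestep, noise, num_steps=1000):
    """One flow-matching training pair: the noisy sample on the straight path
    from data x0 to Gaussian noise, and the velocity target (noise - x0),
    following a common rectified-flow convention."""
    t = timestep / num_steps          # map a discrete step to t in [0, 1)
    xt = (1.0 - t) * x0 + t * noise
    return xt, noise - x0
```

With these settings, AdamW (weight decay 1e-3) would be stepped with `lr_at(step)` while the model regresses the velocity target from `xt`.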
To enable large-scale training with long sequences (e.g., 5-second video clips), we adopt the Ulysses sequence parallelism strategy [82] following Wan, setting the sequence parallel size to 4. We train our model using 64 NVIDIA A100 GPUs, with each GPU processing a quarter of
+
+Table 11: Impact of feature replication strategies when multiple motion trajectories overlap.
+
| Strategy | FID↓ | FVD↓ | PSNR↑ | SSIM↑ | EPE↓ |
| --- | --- | --- | --- | --- | --- |
| Average | 13.1 | 83.4 | 17.5 | 0.63 | 2.7 |
| Random | 12.2 | 83.5 | 17.8 | 0.64 | 2.6 |
+ +Table 12: Impact of using a dense-to-sampling training strategy. + +
| Strategy | FID↓ | FVD↓ | PSNR↑ | SSIM↑ | EPE↓ |
| --- | --- | --- | --- | --- | --- |
| Dense-to-Sampling | 12.9 | 84.2 | 17.5 | 0.62 | 2.6 |
| Sampling | 12.2 | 83.5 | 17.8 | 0.64 | 2.6 |
+
+![](images/f48740da58fbea8687f15a8a6351d557273e8e98c589c69834a5b2fed345b401.jpg)
+
+![](images/4bd00af0826e9d970c4c76c454858e2bdc1ebf8f4acc2628a46c0d7ccf727875.jpg)
+
+![](images/54e9b536eddf186e0edce6ce0be35afa35bb4a67a9231e816e16d227f68867a0.jpg)
+
+![](images/b27d3eb5a366db035a438a590bc295634221e1793fda61ba8999378bba8facd5.jpg)
+
+![](images/ed563032048cdfad8cb3b66f590c5e9c207e519cb9171103e9a69175c84e8006.jpg)
+
+![](images/d8138a369e006b8252ade983b340b9718f46b30be17915bc6999483dcd2b50a3.jpg)
+Input first frame
+
+![](images/bc6e61cb76c35cd27213e6e1d04cb92020c1e5b5c09866c2cea6a0138eb59da1.jpg)
+Figure 12: Wan-Move generalizes to continue controlling motion trajectories when they temporarily disappear. Green circles indicate visible segments, while red circles mark invisible segments, e.g., occluded or out-of-frame parts.
+
+![](images/cc2955ec53641f6b6e6eacc76d062031ba5d88043f8cc8720daddefd47235119.jpg)
+Generated video
+
+![](images/1df623e2e70e41d1dbb12fa43494963d750b3320785440c937b1fa9fe6ef0919.jpg)
+
+![](images/1215b6855c3fc9b83c847f3ced4aa84d331523a4777e1b201b62249f76aa6e6c.jpg)
+
+the sequence length, for a total of 30,000 steps. During inference, we follow Wan's sampling scheme with 50 sampling steps.
+
+# 9 Additional Experiments
+
+# 9.1 Choice of Feature Replication Strategies under Trajectory Overlap
+
+We analyze the impact of feature replication strategies in cases of trajectory overlap. The results, presented in Table 11, demonstrate that randomly selecting a single trajectory's first-frame feature for replication when multiple trajectories coincide yields superior video quality and motion control, as evidenced by lower FID and EPE compared to feature averaging. We hypothesize that averaging features from overlapping trajectories leads to information loss, thereby degrading performance.
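The two strategies compared in Table 11 can be sketched as follows; the function name and the per-position view of the latent feature map are illustrative, not the paper's actual implementation:

```python
import numpy as np

def merge_overlapping_feats(feats, strategy="random", seed=0):
    """Resolve a latent position where several trajectories coincide.

    feats: (n, d) first-frame features of the n overlapping trajectories.
    'random' keeps exactly one trajectory's feature intact, which Table 11
    shows works better than 'average', presumably because averaging mixes
    (and so loses) trajectory-specific information.
    """
    if strategy == "average":
        return feats.mean(axis=0)
    rng = np.random.default_rng(seed)
    return feats[rng.integers(len(feats))]

# Two trajectories landing on the same latent position:
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
picked = merge_overlapping_feats(feats, "random")   # one row, kept unchanged
mixed = merge_overlapping_feats(feats, "average")   # elementwise mean
```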
+
+# 9.2 Choice of Different Training Strategies
+
+This subsection evaluates the performance differences between the dense-to-sampling and direct sampling training strategies, as presented in Table 12. Prior work [14, 15, 2, 53] commonly employs a two-stage dense-to-sampling pipeline, where the first stage uses dense motion trajectories to enhance motion control, followed by sparse trajectories in the second stage. However, we find that our model, trained with random sampling of 1-200 points (see Sec. 3.3 of the main paper), achieves comparable EPE and lower FVD compared to the two-stage approach. These results demonstrate that our method generalizes across point-trajectory counts while simplifying the training process. This generalization ability is also verified in Table 12 of the main paper.
+
+# 9.3 Model Performance Under Trajectory Disappearance
+
+Fig. 12 illustrates our model's capability to generate motion-coherent videos when handling temporarily invisible trajectories. Wan-Move maintains stable generation quality in these challenging scenarios, which we attribute to both the presence of similar cases in the training data and the model's inherent generalization capacity.
+
+# 9.4 Failure Cases
+
+This subsection analyzes and visualizes three primary failure modes of Wan-Move, as illustrated in Fig. 13. First, control degradation occurs when motion trajectories remain invisible for extended durations, causing the model to lose conditional guidance. Second, performance deteriorates in visually complex scenes with multiple interacting objects in crowded environments. Third, implausible
Third, implausible + +![](images/857e21de2b3c7376c064ea6b90d2ea9d87294ae0d3603671c8a75cb90ce1d238.jpg) + +![](images/cbc6df91687ed2ddb9a55cc9214be1a7923846e37aebc03d5709b9ebdcebb968.jpg) +(a) Losing motion control due to persistent trajectory disappearance + +![](images/b20319deaa65e8400b92f41116ea422eae972c28f31bfba77040d915eacbc4c8.jpg) + +![](images/cde7a06af1840dd6b2e4d3a95036b82e66b0b9fc8ac66d4b70c61f9aa689a948.jpg) + +![](images/471aa5b8acf52521830fca1fad5b4442d00c91b0e8d0858d72dedd0123a7e6a6.jpg) + +![](images/10d334dc62eb08575ffd12fc4d2a53d4af74e69b29d7abc95d0a85efd70c89a6.jpg) + +![](images/ed4f441aa046cff2335f9c9efbfa10e7220d3487a4adcc70abab79bd927b4349.jpg) +(b) Visual artifacts under complex and crowded environment + +![](images/d32df68d5619744b720dcc1dc46d51d9e0e8e2a37b19a559d5b5f10cd2e9c452.jpg) + +![](images/fae6e82ffc839c5b08ce5a95cc14fd8c07762dc5d0af659f45e53e7f5774fe31.jpg) + +![](images/b92aec95dc79809900098028ff611212fbadcb344231796d408e12272b912b5e.jpg) + +![](images/31e9c54fc1a44862a303aeedf58ea57e7baaa554c6de0b04c7a3a0adfba33793.jpg) + +![](images/1b8f4f6b286872bce2832ac8bfcc6f15622138a0711ef0a57168a1ada3fd9a39.jpg) + +![](images/9bf4803d5aaca2fdd36300ebb24c44b3d637a3bb7fa33d8f23a33d68675354fb.jpg) + +![](images/10711a81ee0e452cd9b9158c36262d884c1654c748d512a7f5292b3532e264c5.jpg) + +![](images/aff88b69123a2d8e602295c720d21d47ceb3ff0b4b760b919e5f6bb7aef03bae.jpg) + +![](images/44b4ae24c82d816707d834a7c5fe5843642d88f4e32f49956f5123843b0bbdd3.jpg) +Figure 13: Three primary failure modes of Wan-Move. (a) Loss of motion control due to persistent trajectory disappearance; (b) Visual artifacts in overly complex, crowded environments; and (c) motion outputs that violate rigorous physical laws. 
+ +![](images/1794c4f6dcf0d1f5f4a7e4ae209ae4a82b1dc2056cd1d5540722d5f48e5c110a.jpg) +(c) Violating rigorous physical laws + +![](images/06957cb2f4eb7c4c4a3de90f52b65b6d2a6ab65a7c47d262e4f4b32e37db7fb1.jpg) + +![](images/9c374c4c2f283f1ecc5599b1e6009006f3b29720c1d757398a217d6c5ce14d2a.jpg) + +![](images/4ad4ba9169f193a459ca5344593769b869b41c7b1bf6a503aa25ea9ef2295adb.jpg) + +motion trajectories that violate fundamental physical laws result in out-of-distribution predictions. Furthermore, erroneous tracking points identified by CoTracker [68] may compound these failure modes. + +# 10 Qualitative Visualizations + +# 10.1 More Qualitative Comparisons + +We present additional qualitative comparisons with state-of-the-art academic [14] and commercial [4] approaches, as shown in Fig. 14. + +# 10.2 More Camera Control Results + +As demonstrated in Fig. 15, Wan-Move enables camera control. This can be accomplished, following the work [13], by estimating a point cloud using a monocular depth predictor [76], projecting it along a predefined camera trajectory, and applying z-buffering to derive occlusion flags and camera-aligned 2D trajectories. + +# 10.3 More Motion Transfer Results + +This subsection presents dense motion transfer visualizations generated by Wan-Move using dense point trajectories (1,024 in our implementation). As illustrated in Fig. 16, Wan-Move achieves nearly identical appearance quality and motion alignment compared to the original videos given dense + +![](images/239f972c57e14af0bd8bdc458545110f8fcd9d13c432061af4d660061b149288.jpg) +Figure 14: Additional qualitative comparisons with Tora [14] and the commercial model Kling 1.5 Pro [4]. Wan-Move demonstrates superior motion accuracy and visual quality. Major motion control failures or visual artifacts are denoted with red boxes. + +trajectory conditions and the same first frame. 
Moreover, Wan-Move also enables video editing: an additional image editing model modifies the style or content of the first frame, which is then animated with the original video's motion trajectories, as shown in Fig. 17.
+
+# 10.4 More 3D Rotation Results
+
+As illustrated in Fig. 18, Wan-Move additionally supports 3D object rotation. This capability is realized by first estimating depth-based 3D positions, applying a rotational transformation, and then reprojecting the results into 2D trajectories. These trajectories subsequently serve as conditioning inputs for our model to rotate the objects in videos.
+
+![](images/34beff1e43b422f4c953688cad39921fb7b0e0494ac097b3fe1ddc58606d58eb.jpg)
+Figure 15: Wan-Move enables effective and flexible camera control through different point trajectories, such as linear displacement, dolly in, and dolly out.
+
+![](images/df18962cb640cd22a1a6fbc39cf024a36f370404b1d1454a07932a6720e75c4a.jpg)
+Figure 16: Wan-Move enables accurate video motion copy using dense point trajectories (e.g., 1024 points). The synthesized video preserves high fidelity in both appearance and object-level motion alignment with the original video, even under complex environmental conditions.
+
+![](images/c05722cb709b356e5da3d1284c8839f4826e0f52228f08652aae8bf938b69b05.jpg)
+Figure 17: Wan-Move enables video editing through motion copy and additional image editing models. It first applies the image editing model (e.g., ControlNet [18], GPT-4o [83]) to modify the style or content of the first frame, then uses the original video's motion trajectories to animate the edited image frame.
+
+![](images/97e0c21661182ed9e0e20d8ba9498958bc5b5244489a2ff69f8e4416a93306b2.jpg)
+Figure 18: Wan-Move enables object 3D rotation by estimating depth-based positions, applying a rotation, and projecting the results to 2D.
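The lift-rotate-reproject pipeline of Sec. 10.4 can be sketched with a pinhole camera model; the intrinsics below and the choice of rotating about the points' centroid are illustrative assumptions, not values from the paper:

```python
import numpy as np

def rotate_trajectories(uv, depth, angle_deg, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Lift 2D points to camera space using their depths, rotate them about a
    vertical axis through their centroid, and reproject to 2D trajectory points."""
    x = (uv[:, 0] - cx) / fx * depth
    y = (uv[:, 1] - cy) / fy * depth
    pts = np.stack([x, y, depth], axis=1)
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(a), 0.0, np.cos(a)]])
    center = pts.mean(axis=0)
    pts = (pts - center) @ rot.T + center        # rotate about the centroid
    return np.stack([pts[:, 0] / pts[:, 2] * fx + cx,
                     pts[:, 1] / pts[:, 2] * fy + cy], axis=1)

# Two points of an object at 2 m depth; zero rotation leaves them in place.
uv = np.array([[300.0, 240.0], [340.0, 240.0]])
depth = np.array([2.0, 2.0])
same = rotate_trajectories(uv, depth, 0.0)
```

Repeating this per frame with a growing angle yields the 2D trajectories that condition the model.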
\ No newline at end of file diff --git a/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/images.zip b/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1185d2d130ed9227156d6bade8e18de4dff53f03 --- /dev/null +++ b/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76e74f755b5dc067bb192ba8f7105b096ef2e53feb3dbb756cd34f974196b503 +size 1795536 diff --git a/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/layout.json b/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e932832c20c6f2039b39ccf6b9268e6fc32f354b --- /dev/null +++ b/NeurIPS/2025/Wan-Move_ Motion-controllable Video Generation via Latent Trajectory Guidance/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17829ca25f5c08ade8f1b565d7fae4a5456d082dcf6b025e18a903a5686e965c +size 963649 diff --git a/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_content_list.json b/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..96d2ed953b76920980d7fa6c845152d7a20fd8e7 --- /dev/null +++ b/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6add4ff66ab868be2600d75e8fbb9601e6e560d0fe0c6d7399ae753428bc858 +size 137780 diff --git 
a/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_model.json b/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b8fe11b8de78ec534dca44dc38c7c65d42c1b37b --- /dev/null +++ b/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1251dd91f70a3f714b2aa2712848191bc90d9bdde49a23f9a1212d916f92e3d9 +size 180930 diff --git a/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_origin.pdf b/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a9ac27f2b6486f764a93b081d5b250151264d473 --- /dev/null +++ b/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/af368978-d2c7-4003-9396-5a0b96abab81_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ae2dec9695e8678dc64297a7aef0691d372829f73c7a5b9baedfcf304cd5456 +size 14763992 diff --git a/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/full.md b/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/full.md new file mode 100644 index 0000000000000000000000000000000000000000..004e917325eddda3b3a720b0aa74b64e350120e0 --- /dev/null +++ b/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/full.md @@ -0,0 +1,684 @@ +# WarpGAN: Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting + +Kaitao Huang $^{1}$ 
, Yan Yan $^{1\dagger}$ , Jing-Hao Xue $^{2}$ , Hanzi Wang $^{1}$ + +1Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, P.R. China + +$^{2}$ Department of Statistical Science, University College London, UK huangkt@stu.xmu.edu.cn, yanyan@xmu.edu.cn + +![](images/e2d4706c632373f63fdbf03bc2cbc1c7999af6cb2e0f81de3c514b1c5809ed1d.jpg) +Figure 1: Visual examples. Given a single input image (the first row), our WarpGAN synthesizes images from five novel views: front, right, left, top, and down (the second to the sixth rows). + +# Abstract + +3D GAN inversion projects a single image into the latent space of a pre-trained 3D GAN to achieve single-shot novel view synthesis, which requires visible regions with high fidelity and occluded regions with realism and multi-view consistency. However, existing methods focus on the reconstruction of visible regions, while the generation of occluded regions relies only on the generative prior of 3D GAN. As a result, the generated occluded regions often exhibit poor quality due to the information loss caused by the low bit-rate latent code. To address this, we introduce the warping-and-inpainting strategy to incorporate image inpainting into 3D GAN inversion and propose a novel 3D GAN inversion method, WarpGAN. Specifically, we first employ a 3D GAN inversion encoder to project the single-view image into a latent code that serves as the input to 3D GAN. Then, we perform warping to a novel view using the depth map generated by 3D GAN. Finally, we develop a novel SVINet, which leverages the symmetry prior and multi-view image correspondence w.r.t. the same latent code to perform inpainting of occluded regions in the warped image. Quantitative and qualitative experiments demonstrate that our method consistently outperforms several state-of-the-art methods. + +# 1 Introduction + +GANs [13] have made remarkable progress in synthesizing unconditional images. 
In particular, StyleGAN [20, 21] has achieved photorealistic quality on high-resolution images. Several extensions [15, 31, 36] leverage the latent space (i.e., the $\mathcal{W}$ space) to control semantic attributes (e.g., expression and age). However, these 2D GANs suffer from inferior control over geometrical aspects of generated images, leading to multi-view inconsistency for viewpoint manipulation.
+
+Recently, with the development of neural radiance fields (NeRF) [27] in novel view synthesis (NVS), a variety of 3D GANs [2, 5, 6, 14, 29, 39, 41] have been proposed to integrate NeRF into style-based generation, resulting in remarkable success in generating highly realistic images. Building on these generators, 3D GAN inversion methods project a single image into the latent space of a pre-trained 3D GAN generator, obtaining a latent code. Hence, the viewpoint of the input image can be changed by altering the camera pose, and the image attributes can be easily edited by modifying the latent code. Unlike 2D GAN inversion, 3D GAN inversion aims to generate images that maintain both the faithfulness of the input view and the high quality of the novel views.
+
+On the one hand, existing 3D GAN inversion methods rely only on the generative prior of 3D GANs for generating the occluded regions (i.e., the regions invisible in the input image) at novel viewpoints, resulting in unfaithful reconstruction of occluded regions in complex scenarios. On the other hand, for 3D scene generation, several recent methods [11, 30, 35] adopt a warping-and-inpainting strategy: they first predict a depth map of a given image, then warp the input image to novel camera viewpoints with the depth-based correspondence, followed by a 2D inpainting network that synthesizes high-fidelity occluded regions of the warped images.
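A minimal NumPy sketch of such a depth-based forward warp, using nearest-pixel splatting; z-buffering of colliding source pixels is omitted for brevity, and the single-channel interface is illustrative rather than any paper's actual implementation:

```python
import numpy as np

def forward_warp(image, depth, K, rel_pose):
    """Warp a single-channel image to a novel view via its depth map.
    Returns the warped image and a hole mask marking target pixels no source
    pixel reached, i.e. the regions an inpainting network must fill."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])   # homogeneous pixels
    pts = (np.linalg.inv(K) @ pix) * depth.ravel()           # back-project to 3D
    pts = rel_pose[:3, :3] @ pts + rel_pose[:3, 3:4]         # move to the novel view
    proj = K @ pts
    uu = np.round(proj[0] / proj[2]).astype(int)
    vv = np.round(proj[1] / proj[2]).astype(int)
    ok = (uu >= 0) & (uu < w) & (vv >= 0) & (vv < h) & (proj[2] > 0)
    warped = np.zeros_like(image)
    hit = np.zeros((h, w), dtype=bool)
    warped[vv[ok], uu[ok]] = image.ravel()[ok]   # last write wins (no z-buffer)
    hit[vv[ok], uu[ok]] = True
    return warped, ~hit
```

With the identity pose the warp is a no-op and the hole mask is empty; a real camera motion leaves holes exactly where the 2D inpainter operates.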
+ +To address the inferior reconstruction capability of occluded regions in existing 3D GAN inversion methods, motivated by the success of the warping-and-inpainting strategy in 3D scene generation, we introduce image inpainting into 3D GAN inversion. Unfortunately, 3D GAN inversion is dedicated to training with single-view datasets, while the above 3D scene generation methods usually require multi-view datasets for training. This leads to two issues: (1) multi-view inconsistency due to the lack of 3D information (i.e., the real novel view image) to guide the inpainting process; (2) the unavailability of ground-truth images from novel views to compute the loss during model training. + +In this paper, we propose a novel 3D GAN inversion method, WarpGAN, by integrating the warping-and-inpainting strategy into 3D GAN inversion. Specifically, we first train a 3D GAN inversion encoder, which projects the input image into a latent code $w^{+}$ (located in the latent space $\mathcal{W}^{+}$ of the 3D GAN generator). By feeding $w^{+}$ into 3D GAN, we compute the depth map of the input image for geometric warping and perform an initial filling of the occluded regions in the warped image. Subsequently, leveraging the symmetry prior [43, 45] and multi-view image correspondence w.r.t. the same latent code in 3D GANs, we train a style-based novel view inpainting network (SVINet). It can inpaint the occluded regions in the warped image from the original view to the novel view. Hence, we can synthesize plausible novel view images with multi-view consistency. To address the unavailability of ground-truth images, we re-warp the image in the novel view back to the original view and feed it to SVINet. Hence, the loss can be calculated between the inpainting result and the input image. Some visual examples obtained by WarpGAN are given in Fig. 1. 
In summary, the contributions of this paper are as follows:

- We propose a novel 3D GAN inversion method, WarpGAN, which successfully introduces the warping-and-inpainting strategy into 3D GAN inversion, substantially enhancing the quality of occluded regions in novel view synthesis.
- We introduce a style-based novel view inpainting network, SVINet, which fully leverages the symmetry prior and the shared latent code produced by 3D GAN inversion, achieving multi-view-consistent inpainting of the occluded regions of warped images in novel views.
- We perform extensive experiments to validate the superiority of WarpGAN, showing the great potential of the warping-and-inpainting strategy in 3D GAN inversion.

# 2 Related work

3D-Aware GANs. Recent advancements in 3D-aware GANs [2, 5, 6, 14, 29, 39, 41] effectively combine the high-quality 2D image synthesis of StyleGAN [20, 21] with the multi-view synthesis capability of NeRF [27], advancing high-quality image synthesis from 2D to 3D and enabling multi-view image generation. These methods typically employ a two-stage generation pipeline, in which a low-resolution raw image and feature maps are rendered and then upsampled to high resolution by 2D CNN layers. This design ensures geometric consistency across multiple views and achieves impressive photorealism. In this paper, we adopt EG3D [5] as our 3D-aware GAN architecture, which introduces a hybrid explicit-implicit 3D representation (known as the tri-plane).

GAN Inversion. Although recent 2D GAN inversion methods [42] have achieved promising editing performance, they suffer from severe flickering and inevitable multi-view inconsistency when editing 3D attributes (e.g., head pose), since the pre-trained generator is not 3D-aware. Hence, 3D GAN inversion has been developed to maintain multi-view consistency when rendering novel viewpoints.
However, directly transferring 2D methods to 3D without effectively incorporating 3D information inevitably leads to geometry collapse and artifacts.

Similar to 2D GAN inversion, 3D GAN inversion can be categorized into optimization-based and encoder-based methods. Some optimization-based methods [23, 43, 45] generate multiple pseudo-images from different viewpoints to facilitate optimization. For instance, HFGI3D [43] leverages visibility analysis to achieve pseudo-multi-view optimization; SPI [45] utilizes the facial symmetry prior to synthesize pseudo multi-view images; and Pose Opt. [23] simultaneously optimizes camera poses and latent codes. In addition, In-N-Out [44] optimizes a tri-plane for out-of-distribution object reconstruction and employs composite volume rendering. Encoder-based methods project the input image into the latent space of the 3D GAN generator and then employ the generative capacity of the 3D GAN to synthesize novel-view images, while fully utilizing the input image to reconstruct the visible regions of the novel-view images. For example, GOAE [46] computes the residual between the input image and the reconstructed image to complement the $\mathcal{F}$ space of the generator, and introduces an occlusion-aware mix tri-plane for novel-view image generation; Triplanenet [3] calculates an offset for the tri-plane based on the residual and proposes a facial-symmetry prior loss; and Dual Encoder [4] employs two encoders (one for visible regions and the other for occluded regions) for inversion and introduces an occlusion-aware tri-plane discriminator to enhance both fidelity and realism.

Our method is intrinsically different from existing methods, which rely heavily on 3D GAN generative priors to generate occluded regions. Instead, we introduce a novel inpainting network to fill the occluded regions, facilitating the generation of rich details.

Depth-based Warping for Single-shot Novel View Synthesis.
Some 3D GAN inversion methods [23, 43, 45] use depth-based warping to synthesize pseudo multi-view images for optimization. SPI [45] warps the input image to an adjacent view for pseudo-supervision. Pose Opt. [23] warps the image from the canonical viewpoint to the input viewpoint to assist training. HFGI3D [43] utilizes a 3D GAN to fill the occluded regions of the image warped from the input view to novel views, synthesizing several pseudo novel-view images. However, these methods rely solely on a 3D GAN to generate occluded regions, failing to achieve satisfactory results in occluded regions under complex scenarios.

Recently, some methods [11, 30, 35] follow the warping-and-inpainting strategy for single-shot NVS on general scenes. They first predict a depth map for the input image, then warp the input image to a novel view using the depth map, and finally inpaint the occluded regions in the novel view. This strategy effectively preserves the information of the input image while leveraging the powerful capability of 2D inpainting networks to generate reasonable content for occluded regions. Inspired by this strategy, we introduce a 2D inpainting network into 3D GAN inversion by effectively exploiting the symmetry prior and the latent code of the input image.

# 3 Methodology

# 3.1 Overview

As shown in Fig. 2, our WarpGAN consists of a 3D GAN inversion network (including a 3D GAN inversion encoder and a 3D-aware GAN) and a style-based novel view inpainting network (SVINet). First, we utilize a 3D GAN inversion encoder $\mathrm{E}_{w^{+}}$ to project the input image $\mathbf{I}$ into the latent space $\mathcal{W}^{+}$ of the 3D GAN generator, obtaining the latent code $w^{+}$. Based on this, we utilize a rendering decoder to render the depth map $\mathbf{D}$ of $\mathbf{I}$ and the novel view image $\hat{\mathbf{I}}_{novel}^{w^{+}}$.
Under the guidance of the depth map $\mathbf{D}$, we warp the input image $\mathbf{I}$ from the original view $c$ to the novel view $c_{novel}$, thereby obtaining the warped image $\mathbf{I}_{c\rightarrow c_{novel}}^{warp}$ and the occluded regions $\mathbf{M}_{c\rightarrow c_{novel}}^{o}$ of the input image in the target view, that is,

$$
\mathbf{I}_{c\rightarrow c_{novel}}^{warp},\ \mathbf{M}_{c\rightarrow c_{novel}}^{o} = \operatorname{warp}(\mathbf{I}; \mathbf{D}, \pi_{c\rightarrow c_{novel}}, K), \tag{1}
$$

where $\pi_{c\to c_{novel}}$ is the relative camera pose between $c$ and $c_{novel}$, $K$ is the camera intrinsic matrix, and $\operatorname{warp}(\cdot)$ is a geometric warping function [28, 35] that unprojects the pixels of the input image $\mathbf{I}$ into 3D space using its depth map $\mathbf{D}$ and reprojects them based on $\pi_{c\to c_{novel}}$ and $K$.

![](images/a45f0715cd026c5ac39fcc09e741668fe0aa113601f3f4347da528a21eaf69c7.jpg)
Figure 2: Overview of our WarpGAN, which consists of a 3D GAN inversion network and a style-based novel view inpainting network (SVINet). The "Forward warp" flow (blue arrows) illustrates the inference process of novel view synthesis. During model training, we also require the "Reverse warp" flow (red arrows) to warp the novel view image back to the original view for loss computation.

Then, we use $\hat{\mathbf{I}}_{novel}^{w^+}$ to fill in the occluded regions of $\mathbf{I}_{c\rightarrow c_{novel}}^{warp}$, serving as the initial result $\hat{\mathbf{I}}_{novel}^{initial}$ for the occluded regions, which can be formulated as

$$
\hat{\mathbf{I}}_{novel}^{initial} = \mathbf{I}_{c\rightarrow c_{novel}}^{warp} + \mathbf{M}_{c\rightarrow c_{novel}}^{o} \cdot \hat{\mathbf{I}}_{novel}^{w^{+}}. \tag{2}
$$

Subsequently, the initial result $\hat{\mathbf{I}}_{novel}^{initial}$ is fed into SVINet for further inpainting, yielding the final output $\hat{\mathbf{I}}_{novel}$ of WarpGAN. Notably, we employ symmetry-aware feature extraction and modulate the convolutions of the inpainting network with $w^{+}$ during the inpainting process. We also construct a style-based loss to ensure consistency between the generated image in the novel view and the original-view image.

# 3.2 3D GAN Inversion Encoder

Similar to existing encoder-based 3D GAN inversion methods, our 3D GAN inversion encoder $\mathrm{E}_{w^{+}}$ projects an input image $\mathbf{I}$ with camera pose $c$ into the latent space $\mathcal{W}^+$ of the pre-trained 3D GAN, obtaining the latent code $w^{+} = \mathrm{E}_{w^{+}}(\mathbf{I})$. Then, we leverage the generator $\mathrm{G}(\cdot)$ of the 3D GAN to generate the tri-plane and use the rendering decoder $\mathcal{R}$ to render images at specified camera poses. Based on the above, we perform image reconstruction $\hat{\mathbf{I}}^{w^{+}} = \mathcal{R}(\mathrm{G}(w^{+}), c)$ by specifying the camera pose as $c$. In the same way, we obtain the novel view image $\hat{\mathbf{I}}_{novel}^{w^{+}}$ corresponding to the novel camera pose $c_{novel}$. Following the principles of NeRF, we replace the color of the sampling points with their distance to the camera during rendering, obtaining the depth maps $\mathbf{D}$ and $\mathbf{D}_{novel}$. More implementation details can be found in the Appendix.

Inspired by GOAE [46], we employ a pyramid-structured Swin Transformer [26] as the backbone of the encoder, on top of which we leverage feature layers at different scales to generate latent codes at various levels.
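To make the geometric warp in Eq. (1) concrete, the following is a minimal NumPy sketch under simplifying assumptions: nearest-neighbor splatting instead of the softmax splatting used by [28, 35], no z-buffering of colliding pixels, and a pinhole intrinsic matrix `K`. The function name and shapes are illustrative rather than the paper's implementation.

```python
import numpy as np

def warp(image, depth, pose_rel, K):
    """Depth-based forward warp of Eq. (1), simplified.

    image:    (H, W, 3) float source-view image I
    depth:    (H, W) depth map D
    pose_rel: (4, 4) relative camera pose pi_{c -> c_novel}
    K:        (3, 3) pinhole intrinsics
    Returns the warped image and an occlusion mask M (1 = hole to inpaint).
    """
    H, W, _ = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)

    # Unproject pixels into 3D camera space using the depth map.
    pts = np.linalg.inv(K) @ pix.T * depth.reshape(1, -1)
    pts_h = np.concatenate([pts, np.ones((1, pts.shape[1]))], axis=0)

    # Transform to the novel view and reproject with K.
    pts_novel = (pose_rel @ pts_h)[:3]
    proj = K @ pts_novel
    z = np.clip(proj[2], 1e-6, None)
    u = np.round(proj[0] / z).astype(int)
    v = np.round(proj[1] / z).astype(int)

    # Nearest-neighbor splat; unreached target pixels remain marked as holes.
    warped = np.zeros_like(image)
    mask = np.ones((H, W), dtype=image.dtype)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    warped[v[valid], u[valid]] = image.reshape(-1, 3)[valid]
    mask[v[valid], u[valid]] = 0.0
    return warped, mask
```

The initial filling of Eq. (2) then amounts to `initial = warped + mask[..., None] * gan_render`, where `gan_render` stands for the novel-view image rendered from $w^{+}$.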
Since our dataset contains only single-view images, we train $\mathrm{E}_{w^{+}}$ using a reconstruction loss $\mathcal{L}_{w^{+}}$, which includes a pixel-wise (MSE) loss $\mathcal{L}_2$, a perceptual loss $\mathcal{L}_{\mathrm{LPIPS}}$ [48], and an identity loss $\mathcal{L}_{\mathrm{ID}}$ computed with a pre-trained ArcFace network [12]:

$$
\mathcal{L}_{w^{+}}\big(\hat{\mathbf{I}}^{w^{+}}, \mathbf{I}\big) = \lambda_{2}\,\mathcal{L}_{2}\big(\hat{\mathbf{I}}^{w^{+}}, \mathbf{I}\big) + \lambda_{\mathrm{LPIPS}}\,\mathcal{L}_{\mathrm{LPIPS}}\big(\hat{\mathbf{I}}^{w^{+}}, \mathbf{I}\big) + \lambda_{\mathrm{ID}}^{w^{+}}\,\mathcal{L}_{\mathrm{ID}}\big(\hat{\mathbf{I}}^{w^{+}}, \mathbf{I}\big), \tag{3}
$$

where $\lambda_{2}$, $\lambda_{\mathrm{LPIPS}}$, and $\lambda_{\mathrm{ID}}^{w^{+}}$ denote the loss weights for $\mathcal{L}_2$, $\mathcal{L}_{\mathrm{LPIPS}}$, and $\mathcal{L}_{\mathrm{ID}}$, respectively.

# 3.3 Style-Based Novel View Inpainting Network (SVINet)

Due to the occluded regions in the novel view, the warped image contains "holes" (see Fig. 2 for an illustration). To generate high-quality novel-view images, we propose a style-based novel view inpainting network (SVINet) to fill in the "holes" in the warped image.

As shown in Fig. 2, our SVINet follows the traditional "encode-inpaint-decode" architecture [10, 24, 37], consisting of three sub-networks: $N_{E}$, $N_{I}$, and $N_{D}$. Technically, $N_{E}$ first extracts features from the model input while performing downsampling. Then, the inpainting operation is performed in the feature space by $N_{I}$. Finally, $N_{D}$ upsamples the features to obtain the inpainted image.
# 3.3.1 Symmetry-Aware Feature Extraction

We first use the novel-view image $\hat{\mathbf{I}}_{\text{novel}}^{w^+}$ obtained from 3D GAN inversion to fill in the occluded regions of the warped image $\mathbf{I}_{c \rightarrow c_{\text{novel}}}^{warp}$ (Eq. (1)), resulting in an initial inpainting result $\hat{\mathbf{I}}_{\text{novel}}^{\text{initial}}$ (Eq. (2)). We then feed $\hat{\mathbf{I}}_{\text{novel}}^{\text{initial}}$ into $N_E$ to obtain the feature $\mathbf{F}$. In addition, we propose to leverage facial symmetry [43, 45] by warping the mirrored input image $\mathbf{I}_{\text{mirror}}$ to the target view $c_{\text{novel}}$, obtaining $\mathbf{I}_{c_{\text{mirror}} \rightarrow c_{\text{novel}}}^{\text{warp}}$. The mirrored image is then processed in the same manner as described above and fed into $N_E$ to obtain the mirror feature $\mathbf{F}_{\text{mirror}}$.

Subsequently, we utilize $\mathbf{F}$ and $\mathbf{F}_{\text{mirror}}$ to predict the scale map $\mathbf{F}_s$ and the translation map $\mathbf{F}_t$, which are used to refine $\mathbf{F}$ via feature-wise linear modulation (FiLM) [32], obtaining $\mathbf{F}_r$, that is,

$$
\begin{array}{l}
\{\mathbf{F}_{s}, \mathbf{F}_{t}\} = \{\phi_{s}([\mathbf{F}, \mathbf{F}_{\text{mirror}}]_{1}),\ \phi_{t}([\mathbf{F}, \mathbf{F}_{\text{mirror}}]_{1})\}, \\
\mathbf{F}_{r} = \mathbf{F}_{s} \odot \mathbf{F} + \mathbf{F}_{t}, \tag{4} \\
\end{array}
$$

where $\phi_s$ and $\phi_t$ are convolutional neural networks; $[\cdot,\cdot]_1$ denotes concatenation along the first dimension, i.e., the channel dimension; and $\odot$ denotes the Hadamard product.

Next, $\mathbf{F}_r$ is successively fed into $N_I$ and $N_D$ to obtain the inpainting result $\hat{\mathbf{I}}_{\text{novel}}$.
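The refinement in Eq. (4) can be sketched in PyTorch as follows; the channel width `ch` and the single-convolution $\phi_s$, $\phi_t$ are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SymmetryFiLM(nn.Module):
    """Sketch of Eq. (4): predict a scale map F_s and a translation map F_t
    from [F, F_mirror]_1 and modulate F via FiLM (F_r = F_s * F + F_t)."""

    def __init__(self, ch=64):
        super().__init__()
        # phi_s and phi_t: small conv nets over the channel-wise concatenation.
        self.phi_s = nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)
        self.phi_t = nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)

    def forward(self, f, f_mirror):
        cat = torch.cat([f, f_mirror], dim=1)  # concat along channels ([.,.]_1)
        f_s, f_t = self.phi_s(cat), self.phi_t(cat)
        return f_s * f + f_t                   # F_r = F_s ⊙ F + F_t
```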
# 3.3.2 Style-Based Inpainting

Inpainting networks typically rely on the information of the input image to fill in the missing regions. However, due to the limited information contained in single-view images, using only this information for inpainting may lead to multi-view inconsistency. To address this issue, motivated by the fact that images of the same object from different viewpoints share the same latent code in 3D GANs, we introduce the latent code to control the image inpainting process.

Technically, we modulate the convolutions [21, 24] in the "inpaint" and "decode" parts of the inpainting network using the latent code $w^{+}$ obtained from $\mathrm{E}_{w^{+}}$. Modulating the convolutions allows us to control the inpainting process for occluded regions, achieving multi-view consistency in the generated images.

Specifically, we first employ a mapping function $\mathcal{A}$ to obtain the style code $s = \mathcal{A}(w^{+})$. Then the weights of the convolutions $w$ are modulated as

$$
\begin{array}{l}
w_{ijk}^{\prime} = s_{i} \cdot w_{ijk}, \\
w_{ijk}^{\prime\prime} = w_{ijk}^{\prime} \Big/ \sqrt{\textstyle\sum_{i,k} {w_{ijk}^{\prime}}^{2} + \epsilon}, \tag{5} \\
\end{array}
$$

where $w''$ denotes the final modulated weights; $s_i$ is the scale corresponding to the $i$-th input feature map; and $j$ and $k$ enumerate the output feature maps and the spatial footprint of the convolution, respectively.

# 3.3.3 Training strategy

Real data. Since our real dataset contains only single-view images, no target-view images are available to compute the loss and update the model parameters when synthesizing images from novel views. To address this, we propose to re-warp the warped image from the novel view back to the original view, and then compute the loss between the inpainting result and the input image.
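Eq. (5) follows the StyleGAN2-style weight modulation and demodulation [21]. A minimal PyTorch sketch (tensor shapes are illustrative):

```python
import torch

def modulate_conv_weight(weight, style, eps=1e-8):
    """Weight (de)modulation of Eq. (5).

    weight: (out_ch, in_ch, kh, kw) convolution weights (indices j, i, k).
    style:  (in_ch,) per-input-channel scales s_i from the mapping A(w+).
    """
    w = weight * style.view(1, -1, 1, 1)                 # w'_{ijk} = s_i * w_{ijk}
    denom = torch.sqrt((w ** 2).sum(dim=(1, 2, 3), keepdim=True) + eps)
    return w / denom                                     # w''_{ijk}
```

The demodulation divides by the per-output-channel norm (the sum over $i, k$ in Eq. (5)), so each output feature map of the modulated convolution has approximately unit-norm weights.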
Specifically, for the input image $\mathbf{I}$, we first warp it to the novel view $c_{\text{novel}}$ to obtain $\mathbf{I}_{c \rightarrow c_{\text{novel}}}^{\text{warp}}$, and then inpaint it using SVINet to get $\hat{\mathbf{I}}_{\text{novel}}$. Next, we re-warp $\mathbf{I}_{c \rightarrow c_{\text{novel}}}^{\text{warp}}$ back to the source view $c$ and inpaint it again to obtain $\hat{\mathbf{I}}^{\text{re-warp}}$. Based on the above, given the input image $\mathbf{I}$, we obtain two inpainted images, $\hat{\mathbf{I}}_{\text{novel}}$ and $\hat{\mathbf{I}}^{\text{re-warp}}$, for loss computation.

Synthetic data. In addition to real data, we also utilize synthetic data to assist in training our model. We sample a latent code $w_{synth}$ from the latent space of the 3D GAN and generate two images $\mathbf{I}_s^{synth}$ and $\mathbf{I}_t^{synth}$ from different viewpoints. We then warp $\mathbf{I}_s^{synth}$ from the source view to the target view and input it into SVINet to obtain the inpainted image $\hat{\mathbf{I}}_t^{synth}$. Finally, we compute the loss between $\hat{\mathbf{I}}_t^{synth}$ and $\mathbf{I}_t^{synth}$.

Loss function. Our loss function consists of three components: the reconstruction loss, the consistency loss, and the adversarial loss.
The reconstruction loss $\mathcal{L}_{\mathrm{rec}}$ includes the pixel-wise MAE loss $\mathcal{L}_1$, the perceptual loss $\mathcal{L}_{\mathrm{P}}$ [37], and the identity loss $\mathcal{L}_{\mathrm{ID}}$ [12]:

$$
\mathcal{L}_{\mathrm{rec}}(\hat{\mathbf{I}}, \mathbf{I}) = \lambda_{1}\,\mathcal{L}_{1}(\hat{\mathbf{I}}, \mathbf{I}) + \lambda_{\mathrm{P}}\,\mathcal{L}_{\mathrm{P}}(\hat{\mathbf{I}}, \mathbf{I}) + \lambda_{\mathrm{ID}}\,\mathcal{L}_{\mathrm{ID}}(\hat{\mathbf{I}}, \mathbf{I}), \tag{6}
$$

where $\lambda_{1}$, $\lambda_{\mathrm{P}}$, and $\lambda_{\mathrm{ID}}$ denote the loss weights for $\mathcal{L}_1$, $\mathcal{L}_{\mathrm{P}}$, and $\mathcal{L}_{\mathrm{ID}}$, respectively; $\hat{\mathbf{I}}$ and $\mathbf{I}$ represent the generated image and the input image, respectively.

To ensure multi-view consistency, we introduce the consistency loss $\mathcal{L}_{\mathrm{c}}$, which computes the MSE between the latent codes of the original image and the inpainted image. This loss is used to control the multi-view consistency of the generated images:

$$
\mathcal{L}_{\mathrm{c}}(\hat{\mathbf{I}}, \mathbf{I}) = \big\| \mathrm{E}_{w^{+}}(\hat{\mathbf{I}}) - \mathrm{E}_{w^{+}}(\mathbf{I}) \big\|_{2}^{2}. \tag{7}
$$

To further enhance the quality of the inpainted images, we also use an adversarial loss:

$$
\mathcal{L}_{\mathrm{adv}}^{G} = -\mathbb{E}[\log(D(\hat{x}))], \tag{8}
$$

$$
\mathcal{L}_{\mathrm{adv}}^{D} = -\mathbb{E}[\log(D(x))] - \mathbb{E}[\log(1 - D(\hat{x}))] + \gamma\,\mathbb{E}\big[\|\nabla D(x)\|_{2}\big], \tag{9}
$$

where $x$ denotes the real and synthetic images (i.e., $\mathbf{I}$ and $\mathbf{I}_t^{synth}$); $\hat{x}$ represents the inpainted images (i.e., $\hat{\mathbf{I}}_{\text{novel}}$, $\hat{\mathbf{I}}^{\text{re-warp}}$, and $\hat{\mathbf{I}}_t^{synth}$); and $D$ denotes the discriminator [10, 24, 37].
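The consistency and adversarial terms (Eqs. (7)-(9)) can be sketched as below; `encoder` and `disc` stand in for $\mathrm{E}_{w^{+}}$ and the discriminator $D$, and `gamma` is the gradient-penalty weight $\gamma$. This is an illustrative rendering of the printed formulas, not the released training code.

```python
import torch
import torch.nn.functional as F

def consistency_loss(encoder, img_hat, img):
    """Eq. (7): MSE between latent codes of the inpainted and input images."""
    return F.mse_loss(encoder(img_hat), encoder(img))

def adv_losses(disc, real, fake, gamma=1.0):
    """Eqs. (8)-(9): non-saturating GAN losses with a gradient penalty on reals."""
    g_loss = -torch.log(disc(fake) + 1e-8).mean()                      # Eq. (8)
    real = real.detach().requires_grad_(True)
    d_real = disc(real)
    grad = torch.autograd.grad(d_real.sum(), real, create_graph=True)[0]
    d_loss = (-torch.log(d_real + 1e-8)
              - torch.log(1.0 - disc(fake.detach()) + 1e-8)).mean() \
             + gamma * grad.flatten(1).norm(dim=1).mean()              # Eq. (9)
    return g_loss, d_loss
```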
In summary, the loss function for SVINet can be formulated as follows:

$$
\begin{array}{l}
\mathcal{L}_{\mathrm{SVINet}} = \lambda_{\mathrm{rec}}\,\mathcal{L}_{\mathrm{rec}}\big(\big[\hat{\mathbf{I}}^{\text{re-warp}}, \hat{\mathbf{I}}_{t}^{synth}\big]_{0}, \big[\mathbf{I}, \mathbf{I}_{t}^{synth}\big]_{0}\big) \\
\qquad + \lambda_{\mathrm{c}}\,\mathcal{L}_{\mathrm{c}}\big(\big[\hat{\mathbf{I}}_{\text{novel}}, \hat{\mathbf{I}}^{\text{re-warp}}, \hat{\mathbf{I}}_{t}^{synth}\big]_{0}, \big[\mathbf{I}, \mathbf{I}, \mathbf{I}_{t}^{synth}\big]_{0}\big) + \lambda_{\mathrm{adv}}\,\mathcal{L}_{\mathrm{adv}}^{G}, \tag{10} \\
\end{array}
$$

where $[\cdot,\cdot]_0$ denotes concatenation along the 0-th dimension (i.e., the batch dimension); $\lambda_{\mathrm{rec}}$, $\lambda_{\mathrm{c}}$, and $\lambda_{\mathrm{adv}}$ denote the loss weights for $\mathcal{L}_{\mathrm{rec}}$, $\mathcal{L}_{\mathrm{c}}$, and $\mathcal{L}_{\mathrm{adv}}^G$, respectively.

# 4 Experiments

# 4.1 Experimental Settings

Datasets. Our experiments mainly focus on face datasets. We use the FFHQ dataset [20] and 100K pairs of synthetic data for training. The synthetic pairs $\{\mathbf{I}_s^{synth}, \mathbf{I}_t^{synth}\}$ are generated by EG3D [5], sharing the same latent code $w_{synth}$ but rendered with different camera poses. To evaluate the generalization ability of our method, we employ the CelebA-HQ dataset [19] and the multi-view MEAD dataset [40] for testing. We preprocess the images in the datasets and extract their camera poses in the same manner as [5].

Implementation Details. For all experiments, we employ the EG3D [5] generator pre-trained on FFHQ.
Table 1: Comparisons with state-of-the-art methods on the CelebA-HQ and MEAD datasets.

| Category | Method | CelebA-HQ FID↓ | CelebA-HQ ID↑ | MEAD LPIPS↓ (±30°/±60°) | MEAD FID↓ (±30°/±60°) | MEAD ID↑ (±30°/±60°) | Time (s)↓ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Optimization | SG2 $\mathcal{W}^+$ | 26.09 | 0.7369 | 0.2910 / 0.3372 | 39.30 / 64.47 | 0.7992 / 0.7533 | 43.72 |
| | PTI | 25.70 | 0.7616 | 0.2771 / 0.3341 | 44.23 / 66.00 | 0.8089 / 0.7582 | 62.65 |
| | Pose Opt. | 29.04 | 0.7500 | 0.2990 / 0.3428 | 52.25 / 73.23 | 0.7954 / 0.7405 | 91.60 |
| | HFGI3D | 24.30 | 0.7641 | 0.2775 / 0.3494 | 51.24 / 79.81 | 0.8019 / 0.7370 | 264.5 |
| Encoder | pSp | 38.46 | 0.7375 | 0.3116 / 0.3720 | 65.21 / 94.34 | 0.7900 / 0.7401 | 0.05430 |
| | GOAE | 35.41 | 0.7498 | 0.2818 / 0.3453 | 59.69 / 86.23 | 0.8109 / 0.7370 | 0.07999 |
| | Triplanenet | 32.65 | 0.7706 | 0.3379 / 0.4103 | 76.62 / 130.55 | 0.8059 / 0.7135 | 0.1214 |
| | Ours | 19.12 | 0.7882 | 0.2490 / 0.3008 | 38.15 / 64.01 | 0.8315 / 0.7741 | 0.08390 |

![](images/8679da22d64d399625556808c02d6a6197dd9770832275c3d29cf287ddc37aa5.jpg)
Figure 3: Comparisons of novel view synthesis on the CelebA-HQ dataset between our WarpGAN and several state-of-the-art methods.

For the 3D GAN inversion encoder $\mathrm{E}_{w^{+}}$, we set the batch size to 4 and train it for 500K iterations on the FFHQ dataset. We use the Ranger optimizer, which combines Rectified Adam [25] with the Lookahead technique [47], with a learning rate of 1e-4 for $\mathrm{E}_{w^{+}}$. The values of $\lambda_{2}$, $\lambda_{\mathrm{LPIPS}}$, and $\lambda_{\mathrm{ID}}^{w^{+}}$ in Eq. (3) are set to 1.0, 0.8, and 0.1, respectively. For SVINet, we set the batch size to 2 and train it for 300K iterations on both the FFHQ dataset and the synthetic data pairs. For the novel-view camera poses during training, we sample from the camera poses of the pose-rebalanced FFHQ dataset [5]. We use the Adam optimizer [22], with learning rates of 1e-3 and 1e-4 for SVINet and the discriminator, respectively. The values of $\lambda_{1}$, $\lambda_{\mathrm{P}}$, and $\lambda_{\mathrm{ID}}$ in Eq. (6) are set to 10.0, 30.0, and 0.1, respectively. The values of $\lambda_{\mathrm{rec}}$, $\lambda_{\mathrm{c}}$, and $\lambda_{\mathrm{adv}}$ in Eq. (10) are set to 1.0, 0.1, and 10.0, respectively.

Baselines. We compare our WarpGAN with several 3D GAN inversion methods, including optimization-based methods (SG2 $\mathcal{W}^+$ [1], PTI [34], Pose Opt. [23], and HFGI3D [43]) and encoder-based methods (pSp [33], GOAE [46], Triplanenet [3], and Dual Encoder [4]). Note that Dual Encoder employs a 3D GAN other than EG3D and removes the background during training. Since this differs from our experimental setup, we compare with it only in the qualitative analysis.

Evaluation metrics. We evaluate novel view synthesis on the CelebA-HQ dataset and the MEAD dataset. For the CelebA-HQ dataset, we compute the Fréchet Inception Distance (FID) [17] and ID similarity [12] between the original images and the novel-view images.
For the multi-view MEAD dataset, each person has five face images with increasing yaw angles (front, $\pm 30^{\circ}$, and $\pm 60^{\circ}$). We use the front image as input and synthesize the other four views. We then compute LPIPS [48], FID, and ID similarity between the synthesized images and their corresponding ground-truth images. The inference times (Time) in Table 1 are measured on a single Nvidia GeForce RTX 4090 GPU.

![](images/7b592177aa20acaf08e462c12c77484425948bb6c71ce9cde92df67d4178f641.jpg)
Figure 4: Comparisons of different methods on the MEAD dataset for synthesizing images of the other four views (R60, R30, L30, and L60) using the front image as input.

# 4.2 Comparisons with State-of-the-Art Methods

Quantitative Evaluation. As shown in Table 1, we report the performance of different methods on the CelebA-HQ and MEAD datasets. It can be clearly observed that optimization-based methods achieve better performance than encoder-based methods, but at the cost of significantly higher inference times. Among them, HFGI3D, which performs optimization twice using PTI (once for filling the occluded regions of warped images and once for multi-view optimization), shows a substantial performance improvement but suffers from slow inference. In contrast, our WarpGAN, whose inference time is comparable to encoder-based methods, surpasses the performance of optimization-based methods. The excellent performance on the MEAD dataset demonstrates that our method effectively preserves multi-view consistency.

Qualitative Evaluation. We provide visualization results of novel view synthesis in Fig. 3 and Fig. 4. By successfully integrating the warping-and-inpainting strategy into 3D GAN inversion, our method better preserves facial details and generates more reasonable occluded regions. Moreover, our method maintains 3D consistency in novel views more naturally.
# 4.3 Ablation Studies

Table 2: Ablation on different components of our WarpGAN.

| Name | Model | FID↓ | ID↑ |
| --- | --- | --- | --- |
| A | $\mathrm{E}_{w^{+}}$ | 36.07 | 0.7437 |
| B | w/o SVINet | 29.28 | 0.7735 |
| C | w/o Mod$_{w^{+}}$ & $\mathcal{L}_c$ | 19.71 | 0.7879 |
| D | w/o Mod$_{w^{+}}$ | 19.47 | 0.7880 |
| E | w/o symmetry | 20.04 | 0.7825 |
| F | w/o synth data | 19.18 | 0.7880 |
| G | Full Model | 19.12 | 0.7882 |
To investigate the contributions of the key components of our method, we conduct ablation studies. In Table 2, we compare the quality of novel view synthesis using different model variants on the CelebA-HQ dataset.

Comparing "B" and "G" clearly demonstrates the significant role of SVINet in inpainting occluded regions. Comparing "C", "D", and "G" shows that modulating the convolutions of SVINet with $w^{+}$ and incorporating $\mathcal{L}_c$ enhance the performance of our method. Comparing "E" and "G" indicates that leveraging the facial symmetry prior helps generate occluded regions in novel views. Comparing "F" and "G" reveals that training with synthetic data slightly improves the quality of novel view synthesis. We also qualitatively compare "C", "D", "E", and "G" (Full Model) in Fig. 5(a). Incorporating the latent code to control the inpainting process of SVINet and the symmetry prior provides more information, reduces blurring and artifacts, and generates more detailed results.

![](images/8ebd1b64818f76cb206fd76f9fed0ba4e89d2163a5e7949f0f270eb31a6df6f0.jpg)

![](images/3fbab413b905b00f7fc445534c0059889b6b8c96ae8833cab5cb3783f128b5ce.jpg)

![](images/499b111722a3a5bdacc3082b87cfbdc1879c044a9da3ed130e333a5b22cf8d2d.jpg)
Figure 5: (a) Qualitative comparisons of the Full Model with model variants "C", "D", and "E"; (b) Some failure cases; (c) Comparisons of image attribute editing effects with PTI and HFGI3D.

# 4.4 Editing Application

Since our WarpGAN achieves novel view synthesis by inpainting warped images, the visible parts of the novel-view images are minimally affected by the latent code. Consequently, manipulating the latent code alone does not enable attribute editing of the image. To address this issue, similar to HFGI3D [43], we utilize WarpGAN to synthesize a series of novel-view images, which are then fed into PTI [34] for optimization. This process yields an optimized latent code $w_{opt}^{+}$ and a fine-tuned 3D GAN generator.
In this way, attribute editing of the input image and novel view rendering can be achieved by editing $w_{opt}^{+}$ [15, 31, 36] and modifying the camera pose $c$. As shown in Fig. 5(c), we perform attribute editing on the input image for four attributes, "Glasses", "Anger", "Old", and "Young", and compare the results with those of PTI and HFGI3D. The edited images obtained by using the multi-view images synthesized by WarpGAN to assist optimization exhibit higher fidelity and appear more natural.

# 5 Conclusion

In this paper, motivated by the success of the warping-and-inpainting strategy in 3D scene generation, we integrate image inpainting with 3D GAN inversion and propose a novel 3D GAN inversion method, WarpGAN, for high-quality novel view synthesis from a single image. Our WarpGAN consists of a 3D GAN inversion network and SVINet. Specifically, we first obtain the depth of the input image using 3D GAN inversion, then apply depth-based warping to the input image to obtain the warped image, and finally use SVINet to fill in the occluded regions of the warped image. Notably, our SVINet leverages the symmetry prior and the latent code for multi-view-consistent inpainting. Extensive qualitative and quantitative experiments demonstrate that our method outperforms existing state-of-the-art optimization-based and encoder-based methods.

Limitations. Due to inevitable errors in the depth map [11, 30, 35], the warped image sometimes becomes unreliable, which in turn prevents our SVINet from eliminating the resulting artifacts. As illustrated in Fig. 5(b), when the angle variation is small, SVINet can alleviate the deformation of the eyes. However, as the angle variation increases, the output of SVINet deteriorates.
# Acknowledgments and Disclosure of Funding

This work was supported by the National Natural Science Foundation of China under Grant 62372388 and Grant U21A20514, the Major Science and Technology Plan Project on the Future Industry Fields of Xiamen City under Grant 3502Z20241029 and Grant 3502Z20241027, and the Fundamental Research Funds for the Central Universities under Grant 20720240076 and Grant ZYGX2021J004.

# References

[1] R. Abdal, Y. Qin, and P. Wonka. Image2StyleGAN++: How to edit the embedded images? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8296-8305, 2020.
[2] S. An, H. Xu, Y. Shi, G. Song, U. Y. Ogras, and L. Luo. PanoHead: Geometry-aware 3D full-head synthesis in 360°. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20950-20959, 2023.
[3] A. R. Bhattarai, M. Nießner, and A. Sevastopolsky. Triplanenet: An encoder for EG3D inversion. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3055-3065, 2024.
[4] B. B. Bilecen, A. Gokmen, and A. Dundar. Dual encoder GAN inversion for high-fidelity 3D head reconstruction from single images. Advances in Neural Information Processing Systems, pages 87357-87385, 2024.
[5] E. R. Chan, C. Z. Lin, M. A. Chan, K. Nagano, B. Pan, S. De Mello, O. Gallo, L. J. Guibas, J. Tremblay, S. Khamis, et al. Efficient geometry-aware 3D generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16123-16133, 2022.
[6] E. R. Chan, M. Monteiro, P. Kellnhofer, J. Wu, and G. Wetzstein. pi-GAN: Periodic implicit generative adversarial networks for 3D-aware image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5799-5809, 2021.
[7] X. Chen, H. Fan, R. Girshick, and K. He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020.
[8] L. Chi, B. Jiang, and Y. Mu. Fast Fourier convolution. Advances in Neural Information Processing Systems, pages 4479-4488, 2020.
[9] Y. Choi, Y. Uh, J. Yoo, and J.-W. Ha. StarGAN v2: Diverse image synthesis for multiple domains. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8188-8197, 2020.
[10] T. Chu, J. Chen, J. Sun, S. Lian, Z. Wang, Z. Zuo, L. Zhao, W. Xing, and D. Lu. Rethinking fast Fourier convolution in image inpainting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 23195-23205, 2023.
[11] J. Chung, S. Lee, H. Nam, J. Lee, and K. M. Lee. LucidDreamer: Domain-free generation of 3D Gaussian splatting scenes. arXiv preprint arXiv:2311.13384, 2023.
[12] J. Deng, J. Guo, N. Xue, and S. Zafeiriou. ArcFace: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4690-4699, 2019.
[13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 2014.
[14] J. Gu, L. Liu, P. Wang, and C. Theobalt. StyleNeRF: A style-based 3D-aware generator for high-resolution image synthesis. In Proceedings of International Conference on Learning Representations, 2022.
[15] E. Härkönen, A. Hertzmann, J. Lehtinen, and S. Paris. GANSpace: Discovering interpretable GAN controls. Advances in Neural Information Processing Systems, pages 9841-9850, 2020.
[16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[17] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.
[18] J. T. Kajiya and B. P. Von Herzen. Ray tracing volume densities. ACM SIGGRAPH Computer Graphics, pages 165-174, 1984.
[19] T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
[20] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401-4410, 2019.
[21] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8110-8119, 2020.
[22] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[23] J. Ko, K. Cho, D. Choi, K. Ryoo, and S. Kim. 3D GAN inversion with pose optimization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2967-2976, 2023.
[24] W. Li, Z. Lin, K. Zhou, L. Qi, Y. Wang, and J. Jia. MAT: Mask-aware transformer for large hole image inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10758-10768, 2022.
[25] L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and J. Han. On the variance of the adaptive learning rate and beyond. In Proceedings of International Conference on Learning Representations, 2020.
[26] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012-10022, 2021.
[27] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, pages 99-106, 2021.
[28] S. Niklaus and F. Liu. Softmax splatting for video frame interpolation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5437-5446, 2020.
[29] R. Or-El, X. Luo, M. Shan, E. Shechtman, J. J. Park, and I. Kemelmacher-Shlizerman. StyleSDF: High-resolution 3D-consistent image and geometry generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13503-13513, 2022.
[30] H. Ouyang, K. Heal, S. Lombardi, and T. Sun. Text2Immersion: Generative immersive scene with 3D Gaussians. arXiv preprint arXiv:2312.09242, 2023.
[31] O. Patashnik, Z. Wu, E. Shechtman, D. Cohen-Or, and D. Lischinski. StyleCLIP: Text-driven manipulation of StyleGAN imagery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2085-2094, 2021.
[32] E. Perez, F. Strub, H. De Vries, V. Dumoulin, and A. Courville. FiLM: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018.
[33] E. Richardson, Y. Alaluf, O. Patashnik, Y. Nitzan, Y. Azar, S. Shapiro, and D. Cohen-Or. Encoding in style: A StyleGAN encoder for image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2287-2296, 2021.
[34] D. Roich, R. Mokady, A. H. Bermano, and D. Cohen-Or. Pivotal tuning for latent-based editing of real images. ACM Transactions on Graphics, pages 1-13, 2022.
[35] J. Seo, K. Fukuda, T. Shibuya, T. Narihira, N. Murata, S. Hu, C.-H. Lai, S. Kim, and Y. Mitsufuji. GenWarp: Single image to novel views with semantic-preserving generative warping. Advances in Neural Information Processing Systems, 2024.
[36] Y. Shen, C. Yang, X. Tang, and B. Zhou. InterFaceGAN: Interpreting the disentangled face representation learned by GANs. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 2004-2018, 2020.
[37] R. Suvorov, E. Logacheva, A. Mashikhin, A. Remizova, A. Ashukha, A. Silvestrov, N.
Kong, H. Goka, K. Park, and V. Lempitsky. Resolution-robust large mask inpainting with fourier convolutions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2149–2159, 2022. + +[38] O. Tov, Y. Alaluf, Y. Nitzan, O. Patashnik, and D. Cohen-Or. Designing an encoder for StyleGAN image manipulation. ACM Transactions on Graphics, pages 1-14, 2021. +[39] A. Trevithick, M. Chan, T. Takikawa, U. Iqbal, S. De Mello, M. Chandraker, R. Ramamoorthi, and K. Nagano. What you see is what you GAN: Rendering every pixel for high-fidelity geometry in 3d GANs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22765–22775, 2024. +[40] K. Wang, Q. Wu, L. Song, Z. Yang, W. Wu, C. Qian, R. He, Y. Qiao, and C. C. Loy. Mead: A large-scale audio-visual dataset for emotional talking-face generation. In Proceedings of European Conference on Computer Vision, pages 700–717, 2020. +[41] Y. Wu, J. Zhang, H. Fu, and X. Jin. Lpff: A portrait dataset for face generators across large poses. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20327-20337, 2023. +[42] W. Xia, Y. Zhang, Y. Yang, J.-H. Xue, B. Zhou, and M.-H. Yang. GAN inversion: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 3121-3138, 2022. +[43] J. Xie, H. Ouyang, J. Piao, C. Lei, and Q. Chen. High-fidelity 3d GAN inversion by pseudo-multi-view optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 321-331, 2023. +[44] Y. Xu, Z. Shu, C. Smith, S. W. Oh, and J.-B. Huang. In-n-out: Faithful 3d GAN inversion with volumetric decomposition for face editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7225-7235, 2024. +[45] F. Yin, Y. Zhang, X. Wang, T. Wang, X. Li, Y. Gong, Y. Fan, X. Cun, Y. Shan, C. Oztireli, et al. 3d GAN inversion with facial symmetry prior. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 342-351, 2023. +[46] Z. Yuan, Y. Zhu, Y. Li, H. Liu, and C. Yuan. Make encoder great again in 3d GAN inversion through geometry and occlusion-aware encoding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2437-2447, 2023. +[47] M. Zhang, J. Lucas, J. Ba, and G. E. Hinton. Lookahead optimizer: k steps forward, 1 step back. Advances in Neural Information Processing Systems, 32, 2019. +[48] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586-595, 2018. + +# NeurIPS Paper Checklist + +# 1. Claims + +Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? + +Answer: [Yes] + +Justification: The abstract and introduction clearly outline the proposed WarpGAN method, its components, and the improvements in novel view synthesis, accurately reflecting the paper's contributions and scope. + +Guidelines: + +- The answer NA means that the abstract and introduction do not include the claims made in the paper. +- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. +- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. +- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. + +# 2. Limitations + +Question: Does the paper discuss the limitations of the work performed by the authors? + +Answer: [Yes] + +Justification: We discuss the limitations of our method in Sec. 5. 
+ +Guidelines: + +- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. +- The authors are encouraged to create a separate "Limitations" section in their paper. +- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. +- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. +- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. +- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. +- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. +- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. 
+ +# 3. Theory assumptions and proofs + +Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? + +Answer: [NA] + +Justification: Our paper does not involve theoretical results or their proofs. + +Guidelines: + +- The answer NA means that the paper does not include theoretical results. +- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced. +- All assumptions should be clearly stated or referenced in the statement of any theorems. +- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. +- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. +- Theorems and Lemmas that the proof relies upon should be properly referenced. + +# 4. Experimental result reproducibility + +Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? + +Answer: [Yes] + +Justification: We provide the network architecture and the training strategy in Sec. 3, the dataset usage and hyperparameter settings in Sec. 4.1, and additional details in the Appendix, which fully disclose the information needed to reproduce the main experimental results. + +Guidelines: + +- The answer NA means that the paper does not include experiments. +- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
+- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. +- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. +- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example + +(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. +(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. +(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). +(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. 
In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. + +# 5. Open access to data and code + +Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? + +Answer: [Yes] + +Justification: We submit the code in the supplementary material, and all the datasets used are publicly available. + +Guidelines: + +- The answer NA means that paper does not include experiments requiring code. +- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. +- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). +- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. +- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. +- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. +- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). 
+- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. + +# 6. Experimental setting/details + +Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? + +Answer: [Yes] + +Justification: We provide all the training and test details, including datasets, implementation details, baselines, and evaluation metrics in Sec. 4.1. + +Guidelines: + +- The answer NA means that the paper does not include experiments. +- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. +- The full details can be provided either with the code, in appendix, or as supplemental material. + +# 7. Experiment statistical significance + +Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? + +Answer: [No] + +Justification: Our task is image generation rather than prediction. Following existing 3D GAN inversion methods, we use metrics such as FID, ID, and LPIPS, which do not include error bars. + +Guidelines: + +- The answer NA means that the paper does not include experiments. +- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. + +- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). +- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
+- The assumptions made should be given (e.g., Normally distributed errors). +- It should be clear whether the error bar is the standard deviation or the standard error of the mean. +- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified. +- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). +- If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. + +# 8. Experiments compute resources + +Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? + +Answer: [Yes] + +Justification: We provide the execution times for various methods in Table 1 and specify the computational resources used in Sec. 4.1. + +Guidelines: + +- The answer NA means that the paper does not include experiments. +- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. +- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. +- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). + +# 9. Code of ethics + +Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines? 
+ +Answer: [Yes] + +Justification: The research conducted in the paper conforms with the NeurIPS Code of Ethics. + +Guidelines: + +- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. +- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. +- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). + +# 10. Broader impacts + +Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? + +Answer: [Yes] + +Justification: We discuss the societal impacts in the Appendix. + +Guidelines: + +- The answer NA means that there is no societal impact of the work performed. + +- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. +- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. +- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
+- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. +- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). + +# 11. Safeguards + +Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? + +Answer: [Yes] + +Justification: We describe the potential risks in the Appendix. + +Guidelines: + +- The answer NA means that the paper poses no such risks. +- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. +- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. +- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. + +# 12. Licenses for existing assets + +Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? 
+ +Answer: [Yes] + +Justification: We properly cite all the papers involved in our work. + +Guidelines: + +- The answer NA means that the paper does not use existing assets. +- The authors should cite the original paper that produced the code package or dataset. +- The authors should state which version of the asset is used and, if possible, include a URI. +- The name of the license (e.g., CC-BY 4.0) should be included for each asset. +- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. + +- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. +- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. +- If this information is not available online, the authors are encouraged to reach out to the asset's creators. + +# 13. New assets + +Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? + +Answer: [NA] + +Justification: We do not introduce new assets in this paper. + +Guidelines: + +- The answer NA means that the paper does not release new assets. +- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. +- The paper should discuss whether and how consent was obtained from people whose asset is used. +- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. + +# 14. 
Crowdsourcing and research with human subjects + +Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? + +Answer: [NA] + +Justification: Our paper does not involve crowdsourcing nor research with human subjects. + +Guidelines: + +- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. +- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. +- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. + +# 15. Institutional review board (IRB) approvals or equivalent for research with human subjects + +Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? + +Answer: [NA] + +Justification: The paper neither involves crowdsourcing nor research with human subjects. + +Guidelines: + +- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. +- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. + +We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. 
+- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review. + +# 16. Declaration of LLM usage + +Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required. + +Answer: [NA] + +Justification: The core method development in this research does not involve LLMs as any important, original, or non-standard components. + +Guidelines: + +- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components. +- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described. + +# A Additional Architecture Details + +Detailed Structure of SVINet. LaMa [37] introduces fast Fourier convolutions (FFCs) [8] into image inpainting, achieving a receptive field that covers the whole image even in the early network layers. This facilitates the inpainting of large missing areas. To effectively fill in the occluded regions of the warped image, our SVINet is built upon the framework of LaMa and consists of three sub-networks: $N_{E}$, $N_{I}$, and $N_{D}$. For an input image of size $512 \times 512$, $N_{E}$ includes 3 downsampling convolutional layers that downsample the input image to a feature map of size $64 \times 64$; $N_{I}$ contains 9 FFC residual blocks, each of which consists of two FFCs and a residual connection, for inpainting; and $N_{D}$ consists of 3 upsampling convolutional layers that upsample the image resolution back to $512 \times 512$.
The convolutions in $N_{I}$ and $N_{D}$ are modulated by the latent code $w^{+}$ from the 3D GAN inversion encoder $\mathrm{E}_{w^{+}}$. Note that each FFC contains three convolutional branches and one spectral transform branch, and the convolutions within the spectral transform are also modulated, as shown in Fig. 6. + +![](images/a54dd6113cec3dcf58f027eb602ca39dd42c5f72bd9605bb057e469c042d7813.jpg) +Figure 6: The detailed structure of fast Fourier convolution modulated by the latent code $w^{+}$. + +![](images/03eeda9f4845d725c58dc85ef2ef69aac6ed1f83570b15350287bdbc57d53c34.jpg) + +# B Additional Implementation Details + +# B.1 Principles of Neural Radiance Fields + +Neural Radiance Fields (NeRF) [27] employs a fully-connected deep network, which maps a 3D spatial location $\mathbf{x}$ and a viewing direction $\mathbf{d}$ to color $\mathbf{c}$ and density $\sigma$, to represent a scene. By querying $\mathbf{x}$ and $\mathbf{d}$ along camera rays and applying classical volume rendering techniques [18], the color and density information can be projected into a 2D image. Specifically, for each projected ray $\mathbf{r}$ corresponding to a given pixel, $N_{s}$ points (denoted as $\{t_i\}_{i=1}^{N_s}$) are sampled along the ray. For each sampled point, the estimated color and density are represented as $\mathbf{c}_i$ and $\sigma_i$, respectively. The RGB value $C(\mathbf{r})$ for each ray can then be computed via volumetric rendering as follows: + +$$ +C(\mathbf{r}) = \sum_{i=1}^{N_s} T_i \left(1 - \exp\left(-\sigma_i \delta_i\right)\right) \mathbf{c}_i, \tag{11} +$$ + +where $T_{i} = \exp(-\sum_{j=1}^{i-1}\sigma_{j}\delta_{j})$, and $\delta_i = t_{i+1} - t_i$ denotes the distance between adjacent samples.
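To make the rendering rule concrete, the weighted sum in Eq. (11) can be sketched for a single ray in a few lines of dependency-free Python. This is only an illustrative toy, not the paper's batched tri-plane renderer; the function name `render_ray` and the plain-list inputs are our own assumptions.

```python
import math

def render_ray(sigmas, colors, ts):
    """Numerically evaluate Eq. (11): alpha compositing along one ray.

    sigmas: densities sigma_i at the N_s sampled points
    colors: RGB triples c_i at the sampled points
    ts:     sample positions along the ray (length N_s + 1, so that
            delta_i = t_{i+1} - t_i is defined for every sample)
    """
    rgb = [0.0, 0.0, 0.0]
    transmittance = 1.0  # T_1 = exp(0) = 1
    for i in range(len(sigmas)):
        delta = ts[i + 1] - ts[i]
        alpha = 1.0 - math.exp(-sigmas[i] * delta)
        w = transmittance * alpha  # weight T_i * (1 - exp(-sigma_i * delta_i))
        rgb = [c + w * ci for c, ci in zip(rgb, colors[i])]
        transmittance *= math.exp(-sigmas[i] * delta)  # accumulate T_{i+1}
    return rgb
```

As a sanity check, a single sample with very large density is fully opaque, so the returned color equals that sample's color, while zero density everywhere returns black, consistent with the weights $T_i(1 - \exp(-\sigma_i\delta_i))$.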
+ +Similarly, if we replace the color $\mathbf{c}_{i}$ of each sampled point with the distance $t_{i}$ from the sampling point to the camera during volumetric rendering, the depth $d(\mathbf{r})$ along each ray can be obtained as + +$$ +d(\mathbf{r}) = \sum_{i=1}^{N_s} T_i \left(1 - \exp\left(-\sigma_i \delta_i\right)\right) t_i. \tag{12} +$$ + +# B.2 Multi-View Optimization for Editing + +Our WarpGAN synthesizes novel view images based not only on the results of 3D GAN inversion but also on the warping results of the input image. Thus, merely modifying the latent code in our method can hardly achieve desirable editing effects. Inspired by HFGI3D [43], we employ WarpGAN to generate $N$ novel view images $\{\mathbf{I}_i\}_{i=1}^N$ corresponding to $N$ different camera poses $\{c_i\}_{i=1}^N$ to assist the optimization process of PTI [34], denoted as WarpGAN-Opt. + +Specifically, for a single input image $\mathbf{I}$ with the camera pose $c$, we first employ an optimization-based GAN inversion method [1] to jointly optimize the latent code $w^{+}$ and the noise vector $n$ in the 3D GAN generator: + +$$ +w_{opt}^{+}, n = \underset{w^{+}, n}{\arg\min}\, \mathcal{L}_{2}(\mathcal{R}(\mathbf{G}(w^{+}, n; \theta), c), \mathbf{I}) + \lambda_{n} \mathcal{L}_{n}(n), \tag{13} +$$ + +where $\mathcal{L}_n$ is a noise regularization term and $\lambda_{n}$ is a hyperparameter [34].
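The structure of this first optimization stage (Eq. (13), with the noise term omitted for brevity) can be sketched with a toy one-dimensional "generator" and plain finite-difference gradient descent. `optimize_latent`, `toy_G`, the learning rate, and the step count are illustrative assumptions for this sketch, not the paper's actual Adam-based setup.

```python
def optimize_latent(G, theta, target, w0, lr=0.1, steps=200):
    """Toy stand-in for stage 1 of WarpGAN-Opt: freeze the generator
    weights theta and fit the latent code w by minimizing an L2 loss,
    i.e. argmin_w L2(G(w; theta), I) in one dimension.

    Gradients come from central finite differences so the sketch stays
    dependency-free.
    """
    w = w0
    for _ in range(steps):
        eps = 1e-4
        loss = lambda w_: (G(w_, theta) - target) ** 2  # L2 reconstruction
        grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)
        w -= lr * grad  # gradient-descent update on the latent code
    return w

# Toy "generator": a single scalar weight acting on a scalar latent.
toy_G = lambda w, theta: theta * w
w_opt = optimize_latent(toy_G, theta=2.0, target=4.0, w0=0.0)  # w_opt -> 2.0
```

In this toy setting the loop converges to $w_{opt} \approx 2.0$, since $2.0 \times 2.0$ reproduces the target; the second stage (Eq. (14) below) then plays the complementary role, freezing $w_{opt}^{+}$ and updating the generator weights instead.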
+
+Subsequently, we fix the optimized latent code $w_{opt}^{+}$ and fine-tune the 3D GAN generator based on the input image $\mathbf{I}$ and a series of novel view images synthesized by our WarpGAN:
+
+$$
+\theta^{*} = \arg\min_{\theta} \mathcal{L}_{\mathrm{G}}\left(\mathcal{R}\left(\mathbf{G}\left(w_{opt}^{+}; \theta\right), c\right), \mathbf{I}\right) + \lambda_{mv} \sum_{i=1}^{N} \mathcal{L}_{\mathrm{G}}\left(\mathcal{R}\left(\mathbf{G}\left(w_{opt}^{+}; \theta\right), c_{i}\right), \mathbf{I}_{i}\right), \tag{14}
+$$
+
+$$
+\mathcal{L}_{\mathrm{G}} = \lambda_2^{\mathrm{G}} \mathcal{L}_2 + \lambda_{\mathrm{LPIPS}}^{\mathrm{G}} \mathcal{L}_{\mathrm{LPIPS}}, \tag{15}
+$$
+
+where $\lambda_{mv}$, $\lambda_2^{\mathrm{G}}$, and $\lambda_{\mathrm{LPIPS}}^{\mathrm{G}}$ are all set to 1.0.
+
+After the aforementioned process, we obtain the optimized latent code $w_{opt}^{+}$ and the 3D GAN generator with tuned weights $\theta^{*}$. To generate attribute-edited images from different viewpoints, we simply modify $w_{opt}^{+}$ [31, 36], specify the desired camera pose $c_{novel}$, and feed them into the 3D GAN to obtain the edited image $\hat{\mathbf{I}}_{novel}^{edit}$ in the novel view, that is,
+
+$$
+\hat{\mathbf{I}}_{novel}^{edit} = \mathcal{R}\left(\mathbf{G}\left(w_{opt}^{+} + \alpha \mathbf{n}_{att}; \theta^{*}\right), c_{novel}\right), \tag{16}
+$$
+
+where $\mathbf{n}_{att}$ denotes a specific direction for attribute editing and $\alpha$ is a scaling factor.
+
+# C Broader Impacts
+
+Our proposed method, which enables novel view synthesis and attribute editing of faces from a single image, holds the potential to significantly impact various fields such as film, gaming, augmented reality (AR), and virtual reality (VR). However, it also raises concerns regarding privacy and ethics, particularly the risk of generating "deep fakes".
We emphasize the necessity of implementing robust safeguards to ensure the responsible and ethical application of this technology, thereby minimizing the risk of misuse. + +# D Additional Qualitative Results + +Additional Qualitative Evaluation. We provide more visual comparisons between our WarpGAN and several state-of-the-art methods in Fig. 7. In addition, since we utilize multi-view images synthesized by WarpGAN to assist 3D GAN inversion optimization for editing, we also include comparisons with this optimization-based method (WarpGAN-Opt). We can see that, due to the limitations of the low bit-rate latent code, WarpGAN-Opt loses some detail compared with WarpGAN. However, by leveraging the high-quality novel view images synthesized by WarpGAN, WarpGAN-Opt achieves higher fidelity and realism in novel view synthesis than other optimization-based methods. From the figure, it can be observed that our method outperforms Dual Encoder [4]. However, since our method relies on the visible regions of the input image in the novel view to inpaint occluded regions, our method degrades to a typical encoder-based 3D GAN inversion when the view change is large and the visible region is small. In contrast, Dual Encoder focuses on high-fidelity 3D head reconstruction and thus offers greater flexibility in terms of view changes. + +Additional Attribute Editing Results. To more comprehensively demonstrate the capability of our method in image attribute editing, we provide additional attribute editing results in Fig. 8. Specifically, + +![](images/3e12291841753910a5edb68534e134efa63d6ac639cd118f31d1811d4760a5b0.jpg) +Figure 7: Qualitative comparisons between our WarpGAN and several state-of-the-art methods. + +we employ InterFaceGAN [36] for editing the "Anger", "Old", and "Young" attributes, and utilize the text-guided semantic editing method StyleCLIP [31] for editing the "Elsa" and "Surprised" attributes. + +Reference-Based Style Editing. 
In our WarpGAN, the latent code plays a crucial role in controlling the inpainting process of SVINet. To more explicitly analyze the influence of the latent code, we + +![](images/fd7e765fbab9e4f290918d663e7cd34960763c550a3cf2567764ca90a7f722fa.jpg) +Input + +![](images/7a0ff5779f25bebc8354ba96e062c89b3a6268c3de6c086cba3a90104d4bbfd8.jpg) +Elsa Anger Old Young Surprised + +![](images/c61ba58a4fe51577c90ac2ab777757fdfbc5d39d666e927de5fefa34f7c1de83.jpg) +Input + +![](images/3e65fe0e419d87c1bd359b8ddb61b99f23bf380e3c4e24d8c5e9e3c33d836cb2.jpg) +Elsa Anger Old Young Surprised + +![](images/1a0036902951f972236c3a737d896a242d8276b136a737bc7d1b051a2c33874d.jpg) +Input + +![](images/4e8ce900a1b5ef357ac6e82fafef39c89a4429e1a484bd1b98043d5feb7ed509.jpg) +Elsa Anger Old Young Surprised + +![](images/7587c97eabf1b0b7e874c0317d401b3d5bd65f16c9786a1a3c7e1c22beea550f.jpg) +Input + +![](images/baff61a0bcc872a9672f99c384fe2f1bf4e925d734f37bac2f3e5e7c74a694ea.jpg) +Elsa Anger Old Young Surprised +Figure 8: Image attribute editing results obtained by our method. The edited attributes include "Elsa", "Anger", "Old", "Young", and "Surprised". 
+ +![](images/c9bf22dc8a0b23586c809017258dfc600ff0f7a0b558a853282987c5eeeb404b.jpg) +Input + +![](images/1b6705855fc2609d6d4119f1a3475d90f684b0bb37653868bbffa68dae2c3f87.jpg) +Elsa Anger Old Young Surprised + +![](images/15e32903d97450134642b2d28e9e092ac837c8c9b487d4c1d2b8044f184c77ca.jpg) +Input + +![](images/bef4cc9019b79bc1e4e296d97e4407dc9b874a0ec81c2ed091990e852effa042.jpg) +Elsa Anger Old Young Surprised + +![](images/ba3ab64018b9180ccc6c58ac241ffa5b7b20b36ad477af069e46d7bb74039f82.jpg) +Input + +![](images/e11d3f25d5c8a3040637cafb3c4056b7ec03961639894973a851caa4ee83d9f5.jpg) +Elsa Anger Old Young Surprised + +![](images/91fb526f21a9800844c49e4b98bd2249f8a51a332a335a53d70ebdb33d616764.jpg) +Input + +![](images/205c6c62a6ee448bfe0cffd5a849f3df9eb8269f66e9b160aac1afce82c16636.jpg) +Elsa Anger Old Young Surprised + +![](images/9304cca042c583023fb7511cdf23bb2e447ec4e5d6c6add09e9a2a54f8edb463.jpg) +Figure 9: Reference-based style editing. Each row represents the editing results of the same source image corresponding to different reference images, where the source and reference images are identical along the diagonal. + +perform experiments by replacing the latent code of the input image during the inpainting process. Specifically, for the source image $\mathbf{I}_s$ with the camera pose $c_{s}$ and the latent code $w_{s}^{+}$ , we replace them with the camera pose $c_{r}$ and the latent code $w_{r}^{+}$ of the reference image $\mathbf{I}_r$ during inpainting, thereby achieving simultaneous editing of view and style. The results are given in Fig. 9. + +For our SVINet, $w^{+}$ modulates the convolutions in both $N_{I}$ and $N_{D}$ , where $N_{I}$ processes feature maps at a resolution of $64 \times 64$ , and $N_{D}$ processes feature maps at resolutions ranging from $64 \times 64$ to $512 \times 512$ . 
According to the characteristics of StyleGAN [20, 21], the latent code corresponding to feature maps at resolutions of $64 \times 64$ and above primarily controls the detailed features of the image, such as the color scheme and microstructure. From Fig. 9, we observe that the main changes are in the skin tone and hair color of the face.
+
+Qualitative Evaluation in the Cat Domain. To further validate the generalization capability of our method, we evaluate it in the cat domain. Specifically, we use the AFHQ-CAT dataset [9] for training and evaluation. Following e4e [38], we use a ResNet50 network [16] trained with MOCOv2 [7] instead of the pre-trained ArcFace network [12] to compute the identity loss in the non-facial domains during training. As shown in Fig. 10, our method can generalize well to the cat domain and perform novel view synthesis as well as attribute editing.
+
+![](images/d35b74817874d382b8854dc2c3bc0476590b01a07529a9fca75679f27a349460.jpg)
+Figure 10: Novel view synthesis and attribute editing on cat faces by our method. We visualize the novel view synthesis results of WarpGAN and WarpGAN-Opt, as well as the editing results of the attributes "Color" and "Small Eyes".
\ No newline at end of file diff --git a/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/images.zip b/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7f7d92d775d145d3be199f3fff9e2200a8ae3d68 --- /dev/null +++ b/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb286c6310d88d90f18ceece8a7c0e145e4f49679a17fd7035dd299d28c2a7f1 +size 1827764 diff --git a/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/layout.json b/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..85775003dbef58e6b24ec1eee79d9d12c40466f6 --- /dev/null +++ b/NeurIPS/2025/WarpGAN_ Warping-Guided 3D GAN Inversion with Style-Based Novel View Inpainting/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:078a33e327645c709a8445316f0bb487d140e6d11e8bdd718e8c5619ab5b38ac +size 840965 diff --git a/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_content_list.json b/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..89a7254ef85dd561e90524b41d096b25300a6c02 --- /dev/null +++ b/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:02d1686b33682c3d782ca09a541e7d29c103e20d3965e4a1d47b30613b4204b7 +size 403699 diff --git a/NeurIPS/2025/Wasserstein Convergence of 
Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_model.json b/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..fba0bb1eecfa6365e26a59eb97a4411100b6acdd --- /dev/null +++ b/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae09d4dd7721a56d5814edc72c64c1756c4a9afa1bd855e25ddae95e88cb6d74 +size 471168 diff --git a/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_origin.pdf b/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..dfd650d9f8125938b290a7309075b17caa43711e --- /dev/null +++ b/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/6ebde3fc-a9da-44da-b825-48b8ec933d4f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:60a9af97bc17b3ada5bf0ddaa99ea32e2407554afa79b594ad14e7c015f791a4 +size 723569 diff --git a/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/full.md b/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/full.md new file mode 100644 index 0000000000000000000000000000000000000000..395f9b86010d43a3dffca33d6de34df4770ebe3a --- /dev/null +++ b/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/full.md @@ -0,0 +1,2493 @@ +# Wasserstein Convergence of Critically Damped Langevin Diffusions + +Stanislas Strasman $^{1*}$ Sobihan Surendran $^{1,2*}$ Claire Boyer $^{3}$ Sylvain Le Corff $^{1}$ + +Vincent Lemaire1 Antonio Ocello4 + +$^{1}$ Sorbonne Université and Université Paris Cité, CNRS, LPSM, F-75005 
Paris, France
+
+$^{2}$ LOPF, Califrais' Machine Learning Lab, Paris, France
+
+$^{3}$ LMO, Université Paris-Saclay, UMR CNRS 8628, Institut Universitaire de France, Orsay, France
+
+$^{4}$ CREST, ENSAE, Institut Polytechnique de Paris, Palaiseau, France
+
+# Abstract
+
+Score-based Generative Models (SGMs) have achieved impressive performance in data generation across a wide range of applications and benefit from strong theoretical guarantees. Recently, methods inspired by statistical mechanics, in particular Hamiltonian dynamics, have introduced Critically-damped Langevin Diffusions (CLDs), which define diffusion processes on extended spaces by coupling the data with auxiliary variables. These approaches, along with their associated score-matching and sampling procedures, have been shown to outperform standard diffusion-based samplers numerically. In this paper, we analyze a generalized dynamics that extends classical CLDs by introducing an additional hyperparameter controlling the noise applied to the data coordinate, thereby better exploiting the extended space. We further derive a novel upper bound on the sampling error of CLD-based generative models in the Wasserstein metric. This additional hyperparameter influences the smoothness of sample paths, and our discretization error analysis provides practical guidance for its tuning, leading to improved sampling performance.
+
+# 1 Introduction
+
+The recent surge in machine learning and artificial intelligence has driven substantial progress in generative modeling, particularly with the development of Score-based Generative Models (SGMs). These models build on the foundational works on Denoising Diffusion Probabilistic Models (DDPMs) by Sohl-Dickstein et al. (2015); Song and Ermon (2019); Ho et al. (2020) and the advances in score-matching techniques introduced by Hyvarinen and Dayan (2005); Vincent (2011).
+
+Score-based Generative Models (SGMs).
SGMs are probabilistic models designed to create synthetic instances of a target distribution when only a genuine sample (e.g., a dataset of real-life images) is accessible. First, the forward process involves progressively perturbing the training distribution by adding noise to the data until its distribution approximately reaches an easy-to-sample distribution $\pi_{\infty}$. Then, the backward process involves learning to reverse this noising dynamics by sequentially removing the noise. SGMs have quickly gained recognition for their ability to generate high-quality synthetic data. Their applications span diverse areas, including computer vision (Li et al., 2022; Lugmayr et al., 2022), natural language processing (Gong et al., 2023), and other domains where realistic data generation is crucial. This growing body of work has been comprehensively surveyed by Yang et al. (2023), highlighting the versatility and potential of diffusion models. In addition, SGMs provide a particularly interesting class of prior distributions to solve Bayesian inverse problems. Although they lack an explicit and tractable probability density function, a very active research area focuses on combining Monte Carlo guidance and SGMs to solve posterior sampling problems (Wu et al., 2023; Moufad et al., 2025; Victorino Cardoso et al., 2024).
+
+Critically-damped Langevin Diffusion (CLD). In Dockhorn et al. (2022), the authors proposed Critically-damped Langevin Diffusion as a second-order extension of conventional diffusion models. By introducing velocity variables alongside the usual state variables, much like in Hamiltonian Monte Carlo, CLD accelerates exploration of high-dimensional spaces and often yields better sample quality in practice. Although empirical work demonstrates the benefit of CLD over standard score-based models (Dockhorn et al., 2022), its theoretical underpinnings remain incomplete.
Existing convergence guarantees are only expressed in terms of Kullback-Leibler divergence (Conforti et al., 2025; Chen et al., 2023) and fail to capture any computational advantage for kinetic dynamics, leaving a gap between observed performance and formal analysis. + +Contributions. We first discuss the challenges of establishing Wasserstein convergence under the standard assumptions used for Variance-Preserving (VP) or Variance-Exploding (VE) SGMs, where the forward process is elliptic (Gao et al., 2025; Strasman et al., 2025; Gentiloni-Silveri and Ocello, 2025; Bruno et al., 2025). We then provide, to the best of our knowledge, the first upper bound for CLD in the Wasserstein metric through coupling techniques under weaker assumptions, achieving convergence rates comparable to those of other SGMs. Crucially, this result is not implied by previous Kullback-Leibler divergence bounds (Conforti et al., 2025; Chen et al., 2023), and our proof technique differs significantly from existing Wasserstein analyses of diffusion models. + +However, it is possible to introduce a modified dynamics that includes an additional hyperparameter controlling the noise on the data coordinate of CLD, thereby restoring ellipticity and enabling an analysis closely aligned with that of VP and VE models, but formulated on an extended phase space with matrix-valued drifts and diffusions. This hyperparameter governs the smoothness of sample paths, allowing a detailed analysis of the generative error as a function of this smoothness parameter. Such analysis offers practical guidance for tuning this hyperparameter and potentially improves sampling performance compared to standard SGMs and CLD methods. The benefits of this additional parameterization are demonstrated numerically on challenging synthetic datasets. + +# 2 Notation and Background + +Notation. 
We use $\pi$ to denote probability distributions and $p$ to denote their corresponding densities with respect to the Lebesgue measure or another reference measure. The identity matrix of size $d$ is written $\mathbf{I}_d$. For $x, y \in \mathbb{R}^d$, we denote by $\langle x, y \rangle$ the standard inner product of $\mathbb{R}^d$, and by $\|\cdot\|$ the Euclidean norm for vectors and its induced operator norm for matrices. Let $\|\cdot\|_F$ be the Frobenius norm defined for $A \in \mathbb{R}^{d \times d}$ as $\|A\|_F \coloneqq \sqrt{\operatorname{Tr}(A^\top A)}$. For symmetric matrices $A, B \in \mathbb{R}^{d \times d}$, we write $A \preccurlyeq B$ to mean that $B - A$ is positive semidefinite. We denote the time derivative of a function by $\dot{f}(t) \coloneqq \frac{\mathrm{d}}{\mathrm{d}t} f(t)$. We use the symbol $\otimes$ both for the Kronecker product when applied to matrices and for the product of probability measures when applied to distributions; the intended meaning will be clear from context. For any matrix $A \in \mathbb{R}^{d \times d}$, we denote its largest eigenvalue (resp. singular value) by $\lambda_{\max}(A)$ (resp. $\sigma_{\max}(A)$) and its smallest eigenvalue (resp. singular value) by $\lambda_{\min}(A)$ (resp. $\sigma_{\min}(A)$). For random vectors $X, Y \in \mathbb{R}^d$, define $\|X\|_{L_2} \coloneqq \left(\mathbb{E}\left[\|X\|^2\right]\right)^{1/2}$, and we write $X \perp Y$ to mean that $X$ is independent of $Y$. The notation $\mathcal{L}(X)$ denotes the law (distribution) of a random vector $X$. For $a, b \in \mathbb{R}$, we write $a \wedge b \coloneqq \min\{a, b\}$ and $a \vee b \coloneqq \max\{a, b\}$.
+
+Score-based Generative Models. SGMs employ a Gaussian Markovian diffusion process that smoothly transports the target data distribution $\pi_{\mathrm{data}} \in \mathcal{P}(\mathbb{R}^d)$ towards an easy-to-sample Gaussian distribution $\pi_{\infty} \in \mathcal{P}(\mathbb{R}^d)$.
This process, known as forward diffusion, is the solution to the following stochastic differential equation (SDE) on a fixed time horizon $t \in [0, T]$,
+
+$$
+\mathrm{d}\overrightarrow{X}_t = -\alpha \beta(t) \overrightarrow{X}_t \mathrm{d}t + \sqrt{2\beta(t)}\, \mathrm{d}B_t, \quad \overrightarrow{X}_0 \sim \pi_{\mathrm{data}}, \tag{1}
+$$
+
+with $(B_{t})_{t\in [0,T]}$ a $d$-dimensional Brownian motion and $\beta(t): [0,T] \to \mathbb{R}_{+}$. In particular, when $\alpha = 0$ and $\beta(t)$ is of the form $\beta^{\mathrm{VE}}(t)\dot{\beta}^{\mathrm{VE}}(t)$, the process is known as Variance Exploding (Song and Ermon, 2019), and when $\alpha = 1$, the process is known as Variance Preserving (Sohl-Dickstein et al., 2015; Ho et al., 2020). This transformation can be reversed (Anderson, 1982; Haussmann and Pardoux, 1986; Cattiaux et al., 2023) and is also governed by an SDE, known as the backward process
+
+$$
+\mathrm{d}\overleftarrow{X}_t = \left(\alpha \beta(T-t) \overleftarrow{X}_t + 2\beta(T-t) \nabla \log p_{T-t}(\overleftarrow{X}_t)\right) \mathrm{d}t + \sqrt{2\beta(T-t)}\, \mathrm{d}B_t, \quad \overleftarrow{X}_0 \sim p_T, \tag{2}
+$$
+
+where $p_t$ is the time marginal p.d.f. of the forward process for $0 \leq t \leq T$. As a consequence, $\overleftarrow{X}_T$ is distributed according to $\pi_{\mathrm{data}}$. In practice, however, one cannot draw exact i.i.d. samples from this continuous-time process, and implementations of SGMs rely on three key approximations.
+
+- Mixing error. Since the distribution of $\overrightarrow{X}_T$ is not analytically available in most cases, $\overleftarrow{X}_0$ is initialized at a known distribution $\pi_{\infty}$, close to $p_T$.
+- Discretization error.
Since the backward dynamics are in most cases non-linear, the backward process is discretized to sample $\overleftarrow{X}_T$, which introduces an error due to evaluating the (time-continuous) score function only at discrete time steps.
+- Approximation error. The score function depends on the unknown data distribution and thus cannot be computed in closed form. To approximate it, we use a neural network architecture $s_{\theta} : [0,T] \times \mathbb{R}^{d} \to \mathbb{R}^{d}$ parameterized by $\theta \in \Theta$ and trained, for example, via Denoising Score Matching (see, e.g., Vincent, 2011):
+
+$$
+\mathcal{L}_{\mathrm{DSM}}(\theta) = \mathbb{E}\left[\lambda(\tau) \left\| s_{\theta}\left(\tau, \vec{X}_{\tau}\right) - \nabla \log p_{\tau}\left(\vec{X}_{\tau} \mid \vec{X}_{0}\right) \right\|^2\right], \tag{3}
+$$
+
+where $\tau$ is uniformly distributed on $[0,T]$ and independent of $\vec{X}_0$, $\vec{X}_{\tau} \sim p_{\tau}(\cdot|\vec{X}_0)$, and $\lambda : [0,T] \to \mathbb{R}_{>0}$ is a positive weighting function.
+
+Theoretical studies of SGMs focus on these sources of error to derive results for the total variation distance (De Bortoli et al., 2021), the Kullback-Leibler divergence (Conforti et al., 2025; De Bortoli et al., 2021; Chen et al., 2023; Benton et al., 2024) or the Wasserstein-2 distance (Lee et al., 2022, 2023; Bruno et al., 2025; Gao et al., 2025; Strasman et al., 2025; Gentiloni-Silveri and Ocello, 2025).
+
+Kinetic Ornstein-Uhlenbeck.
Inspired by Hamiltonian mechanics, kinetic SGMs operate in an extended position-velocity phase space, defined as $\vec{\mathbf{U}}_t = (\vec{X}_t,\vec{V}_t)^\top \in \mathbb{R}^{2d}$, which satisfies the following stochastic differential equation
+
+$$
+\mathrm{d}\overrightarrow{\mathbf{U}}_t = A \overrightarrow{\mathbf{U}}_t \mathrm{d}t + \Sigma \mathrm{d}B_t, \quad \overrightarrow{\mathbf{U}}_0 \sim \pi_{\mathrm{data}} \otimes \pi_v, \tag{4}
+$$
+
+where $\pi_v = \mathcal{N}(0, v^2 \mathbf{I}_d)$ and $(B_{t})_{t\in [0,T]}$ denotes a $2d$-dimensional standard Brownian motion, with
+
+$$
+A = \begin{pmatrix} 0 & a^2 \\ -1 & -2a \end{pmatrix} \otimes \mathbf{I}_d, \quad \text{and} \quad \Sigma = \begin{pmatrix} 0 & 0 \\ 0 & \sigma \end{pmatrix} \otimes \mathbf{I}_d. \tag{5}
+$$
+
+Similar to (1), this process is Gaussian conditional on the distribution at time 0 (see Proposition A.2). The associated linear system corresponds to the stochastic analogue of a damped harmonic oscillator in the critically damped regime, with $a = 1/\sqrt{M}$ and $\sigma = 2/\sqrt{a}$, following the parameterization of Dockhorn et al. (2022). Note that (4) can also be expressed using a time-change or noise-schedule function $\beta: [0,T] \to \mathbb{R}_+$ (see Section E.2). This will not play a key role in our theoretical analysis but is an important feature of practical numerical implementation.
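The critically damped terminology can be checked on the $2 \times 2$ drift block of (5): its characteristic polynomial is $\lambda^2 + 2a\lambda + a^2 = (\lambda + a)^2$, so both eigenvalues coincide at $-a$ and the deterministic part of the dynamics has no oscillatory modes. A quick NumPy sketch (the values of $a$, $\sigma$, and $d$ are illustrative):

```python
import numpy as np

d = 3                    # data dimension (illustrative)
a, sigma = 0.5, 2.0      # damping and noise parameters (illustrative)

block = np.array([[0.0, a**2],
                  [-1.0, -2.0 * a]])               # 2x2 drift block of Eq. (5)
A = np.kron(block, np.eye(d))                      # full 2d x 2d drift matrix
Sigma = np.kron(np.diag([0.0, sigma]), np.eye(d))  # noise on the velocity only

eigs = np.linalg.eigvals(block)
print(A.shape, Sigma.shape, eigs)  # double eigenvalue at -a (critical damping)
```

The Kronecker products reproduce the block structure of (5); the zero upper-left entry of `Sigma` is exactly the degeneracy discussed in Section 3.1.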
+
+Applying time-reversal results for diffusion processes (see, e.g., Haussmann and Pardoux, 1986; Cattiaux et al., 2023), the backward process $(\overleftarrow{\mathbf{U}}_t)_{t\geq 0}$ is the solution to the following SDE:
+
+$$
+\mathrm{d}\overleftarrow{\mathbf{U}}_t = -A \overleftarrow{\mathbf{U}}_t \mathrm{d}t + \Sigma^2 \nabla \log p_{T-t}\left(\overleftarrow{\mathbf{U}}_t\right) \mathrm{d}t + \Sigma \mathrm{d}B_t, \tag{6}
+$$
+
+with initial condition $\overleftarrow{\mathbf{U}}_0 \sim p_T$, where $p_t : \mathbb{R}^{2d} \to \mathbb{R}_+$ is the probability density function of $\overrightarrow{\mathbf{U}}_t$.
+
+CLD-based SGMs. To sample from $\overleftarrow{\mathbf{U}}_T$ (and, in particular, from $\overleftarrow{X}_T \sim \pi_{\mathrm{data}}$), one must rely on the three SGM approximations discussed earlier. The mixing error is analogous to that of standard SGMs and leverages the ergodicity of the forward process, converging to a known Gaussian distribution, to initialize the backward process. The discretization of the nonlinear backward SDE can be performed using classical numerical integrators commonly employed in SGMs, such as Euler-Maruyama (Song et al., 2021) or exponential integrators (Conforti et al., 2025). Additionally, due to the Hamiltonian structure of the kinetic process, symplectic integrators (Neal, 2011) may also be appropriate (Dockhorn et al., 2022). Finally, the score approximation can be implemented by applying Denoising Score Matching, similar to (3), on the extended phase space $\vec{\mathbf{U}}_t = (\vec{X}_t, \vec{V}_t)^\top$, that is, using the conditional score function $\nabla \log p_t(\vec{\mathbf{U}}_t \mid \vec{\mathbf{U}}_0)$.
However, since the distribution of $\vec{V}_0$ is known and Gaussian, it can be analytically marginalized, yielding the following objective function, known as Hybrid Score Matching:
+
+$$
+\mathcal{L}_{\mathrm{HSM}}(\theta) = \mathbb{E}\left[\lambda(\tau) \left\| s_{\theta}(\tau, \overrightarrow{\mathbf{U}}_{\tau}) - \nabla \log p_{\tau}(\overrightarrow{\mathbf{U}}_{\tau} \mid \overrightarrow{X}_{0}) \right\|^2\right],
+$$
+
+where $\tau$ is uniformly distributed on $[0,T]$, $\tau \perp \overrightarrow{X}_0$, $\overrightarrow{\mathbf{U}}_{\tau} \sim p_{\tau}(\cdot|\overrightarrow{X}_0)$, and $\lambda : [0,T] \to \mathbb{R}_{>0}$ is a positive weighting function. Empirically, Hybrid Score Matching tends to yield more stable training dynamics by reducing the variance of the training objective (Dockhorn et al., 2022).
+
+# 3 Wasserstein Convergence of CLDs
+
+In this section, we analyze the convergence of CLDs with respect to the 2-Wasserstein distance under the Euler-Maruyama discretization scheme. We first discuss the motivation for this analysis before introducing the setting, assumptions, and main results.
+
+# 3.1 Motivation
+
+While convergence results have been established in terms of the Kullback-Leibler divergence (Conforti et al., 2025; Chen et al., 2023), no analogous results currently exist for the Wasserstein-2 metric. Proving convergence in $\mathcal{W}_2$ requires establishing a contraction property of the backward dynamics in this metric, a challenging task for hypo-coercive SDEs (Villani, 2009; Eberle et al., 2019; Monmarché, 2023). The main difficulty arises from the degeneracy of the diffusion term, since the Brownian motion in CLDs acts only on the velocity component. To illustrate this point, consider the following example.
+
+Introduce the change of variables $\vec{Y}_t = \vec{X}_t + a\vec{V}_t$, under which one component of the system evolves as an Ornstein-Uhlenbeck process.
Writing $\vec{Z}_t = (\vec{X}_t, \vec{Y}_t)^\top$, the forward SDE in (4) can be rewritten as
+
+$$
+\mathrm{d}\vec{Z}_t = a \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix} \vec{Z}_t \mathrm{d}t + \begin{pmatrix} 0 & 0 \\ 0 & \sigma \end{pmatrix} \mathrm{d}B_t.
+$$
+
+Notably, the transformed process $(\vec{Y}_t)_{t\in [0,T]}$ corresponds to an Ornstein-Uhlenbeck process. By the time-reversal property, the corresponding backward process satisfies
+
+$$
+\overleftarrow{Y}_t = \overleftarrow{X}_t + a \overleftarrow{V}_t,
+$$
+
+which leads to the following backward SDE:
+
+$$
+\mathrm{d}\overleftarrow{Z}_t = a \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \overleftarrow{Z}_t \mathrm{d}t + \sigma^2 \binom{0}{\nabla_y \log p_{T-t}(\overleftarrow{Z}_t)} \mathrm{d}t + \begin{pmatrix} 0 & 0 \\ 0 & \sigma \end{pmatrix} \mathrm{d}B_t, \tag{7}
+$$
+
+where $p_t$ denotes the probability density function of $\vec{Z}_t$. A standard approach to establishing contraction consists in studying the difference process associated with the dynamics in (7), starting from two deterministic initial conditions $(x_0, y_0), (x_0', y_0') \in \mathbb{R}^{2d}$ and denoting by $(X_t, Y_t)_{t \in [0,T]}$ and $(X_t', Y_t')_{t \in [0,T]}$ the corresponding solutions. Under a synchronous coupling, i.e., using the same Brownian motion to drive the evolution of both processes, the difference process becomes a deterministic ODE, whose stability properties determine the contraction properties of the system.
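The change of variables can be verified numerically: conjugating the drift block of (4) by the map $(x, v) \mapsto (x, x + av)$ yields exactly the upper-triangular drift block above. A quick check (the value of $a$ is arbitrary):

```python
import numpy as np

a = 0.7                                        # arbitrary damping parameter
A = np.array([[0.0, a**2], [-1.0, -2.0 * a]])  # drift block of Eq. (4)
T = np.array([[1.0, 0.0], [1.0, a]])           # (x, v) -> (x, y) = (x, x + a v)

M = T @ A @ np.linalg.inv(T)                   # drift block in (X, Y) coordinates
target = a * np.array([[-1.0, 1.0], [0.0, -1.0]])
print(np.allclose(M, target))                  # prints True
```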
In particular, applying the mean value theorem to the gradients of the log-density, the following holds for $t \in [0,T]$:
+
+$$
+\mathrm{d}\begin{pmatrix} X_t - X_t' \\ Y_t - Y_t' \end{pmatrix} = \begin{pmatrix} a + \sigma^2 G_t & -a \\ 0 & a + \sigma^2 H_t \end{pmatrix} \begin{pmatrix} X_t - X_t' \\ Y_t - Y_t' \end{pmatrix} \mathrm{d}t, \tag{8}
+$$
+
+where
+
+$$
+H_t = \int_0^1 \nabla_y^2 \log p_{T-t}\left(X_t' + \gamma\left(X_t - X_t'\right), Y_t' + \gamma\left(Y_t - Y_t'\right)\right) \mathrm{d}\gamma,
+$$
+
+$$
+G_t = \int_0^1 \nabla_y \nabla_x^\top \log p_{T-t}\big(X_t' + \gamma(X_t - X_t'), Y_t' + \gamma(Y_t - Y_t')\big) \mathrm{d}\gamma.
+$$
+
+To ensure contraction of the system, all eigenvalues of the matrix in (8) must be negative. However, the main difficulty lies in controlling the term $G_t$, which involves the mixed second-order derivative $\nabla_y\nabla_x^\top \log p_{T-t}$. For contraction to occur, this term must also be sufficiently negative. This is a strong and challenging requirement, as it demands a form of joint concavity of cross-derivatives, which is not generally ensured even when $p_{T-t}$ is strongly log-concave in each variable separately.
+
+# 3.2 Settings: Dynamics and Backward Discretization
+
+Position-noise regularization in the extended phase space. As detailed in Dalalyan and Riou-Durand (2020), the performance of kinetic Langevin-based samplers depends on the mixing rate and on the regularity of the underlying dynamics. To better exploit the extended phase space, we introduce a modified dynamics that adds a small noise term of magnitude $\varepsilon \geq 0$ on the position coordinate.
Crucially, when $\varepsilon$ is strictly positive, this modification restores ellipticity of the forward and backward processes, which greatly facilitates the theoretical analysis. This hyperparameter controls the smoothness of the sample paths, and the analysis of the discretization error allows a practical tuning to improve sampling performance in comparison with standard SGMs and kinetic-based diffusion samplers. The diffusion coefficient of the forward SDE is then given by
+
+$$
+\Sigma_{\varepsilon} := \begin{pmatrix} \varepsilon & 0 \\ 0 & \sigma \end{pmatrix} \otimes \mathbf{I}_d,
+$$
+
+giving a process $(\vec{\mathbf{U}}_t)_{t\in [0,T]}$ taking values in $\mathbb{R}^{2d}$ which satisfies the following SDE
+
+$$
+\mathrm{d}\vec{\mathbf{U}}_t = A \vec{\mathbf{U}}_t \mathrm{d}t + \Sigma_{\varepsilon} \mathrm{d}B_t, \quad \vec{\mathbf{U}}_0 \sim \pi_{\mathrm{data}} \otimes \pi_v, \tag{9}
+$$
+
+with $\varepsilon \geq 0$. Note that the case $\varepsilon = 0$ recovers the classical CLD framework. In the following, we write
+
+$$
+\mathbf{s}_t(u) = \nabla \log p_t(u), \quad \text{for } t \geq 0,\ u \in \mathbb{R}^{2d}. \tag{10}
+$$
+
+Modified score function. Following Conforti et al. (2025), we adopt a modified score formulation based on the rescaled density $\tilde{p}_t \coloneqq p_t / p_\infty$, where $p_{\infty}$ is the density of the stationary distribution associated with (4). This perspective, also emphasized in Cattiaux et al. (2023); Conforti and Léonard (2022); Strasman et al. (2025); Conforti et al. (2025); Gentiloni-Silveri and Ocello (2025); Pham et al. (2025), reveals deep connections with stochastic control theory. In particular, the modified score satisfies a Hamilton-Jacobi-Bellman (HJB) equation, which we highlight and exploit in the sequel.
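The ellipticity claim can be read off the rank of the diffusion matrix: $\Sigma_{\varepsilon}$ has full rank $2d$ whenever $\varepsilon > 0$, while the classical CLD case $\varepsilon = 0$ is degenerate of rank $d$, with noise reaching the position coordinate only indirectly through the velocity. A quick NumPy check (the values of $d$, $\sigma$, and $\varepsilon$ are illustrative):

```python
import numpy as np

def diffusion_matrix(eps, sigma, d):
    """Sigma_eps built from its 2x2 block via the Kronecker product."""
    return np.kron(np.array([[eps, 0.0], [0.0, sigma]]), np.eye(d))

d, sigma = 3, 2.0
rank_cld = np.linalg.matrix_rank(diffusion_matrix(0.0, sigma, d))  # epsilon = 0
rank_eps = np.linalg.matrix_rank(diffusion_matrix(0.1, sigma, d))  # epsilon > 0
print(rank_cld, rank_eps)  # d versus 2d
```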
With this notation, the backward process $\overleftarrow{\mathbf{U}}$ can be written equivalently as

$$
\mathrm{d}\overleftarrow{\mathbf{U}}_t = \tilde{A}_\varepsilon \overleftarrow{\mathbf{U}}_t\,\mathrm{d}t + \Sigma_\varepsilon^2 \nabla \log \tilde{p}_{T-t}\left(\overleftarrow{\mathbf{U}}_t\right)\mathrm{d}t + \Sigma_\varepsilon\,\mathrm{d}B_t, \tag{11}
$$

with $\tilde{A}_\varepsilon = -A - \Sigma_\varepsilon^2 \Sigma_\infty^{-1}$. In the following, we write $\tilde{\mathbf{s}}_t(u) := \nabla \log \tilde{p}_t(u)$, for $t \geq 0$, $u \in \mathbb{R}^{2d}$.

Backward process discretization. Let $N \in \mathbb{N}$ denote the number of discretization steps, so that $0 = t_0 < t_1 < \ldots < t_N = T$. To analyze the convergence of the discretized backward process, we introduce the continuous-time interpolation $(\bar{\mathbf{U}}_t)_{t \in [0,T]}$ of the Euler scheme for the time-reversed process $(\overleftarrow{\mathbf{U}}_t)_{t \in [0,T]}$. This is defined as the Itô process such that, for $t \in [t_k, t_{k+1}]$,

$$
\bar{\mathbf{U}}_t = \bar{\mathbf{U}}_{t_k} + \left(\tilde{A}_\varepsilon \bar{\mathbf{U}}_{t_k} + \Sigma_\varepsilon^2 \tilde{\mathbf{s}}_{T-t_k}\left(\bar{\mathbf{U}}_{t_k}\right)\right)(t - t_k) + \Sigma_\varepsilon\left(B_t - B_{t_k}\right), \tag{12}
$$

where the process is initialized at $p_T$ (i.e., $\bar{\mathbf{U}}_0 \sim p_T$). When initialized at $\pi_\infty$, we denote by $(\bar{\mathbf{U}}_t^\infty)_{t \in [0,T]}$ the corresponding Itô process. For simplicity, the discretization is performed on a uniform grid with step size $h = T/N$, so that $t_{k+1} - t_k = h$ for all $k$.

Generative model.
The generative model is defined as the continuous-time interpolation of the discretized backward process, in which the true (unknown) modified score function is replaced by its parametric approximation $\tilde{s}_\theta : [0,T] \times \mathbb{R}^{2d} \to \mathbb{R}^{2d}$. The resulting process, denoted by $(\bar{\mathbf{U}}_t^\theta)_{t\in[0,T]}$, satisfies, for $t \in [t_k, t_{k+1}]$,

$$
\bar{\mathbf{U}}_t^\theta = \bar{\mathbf{U}}_{t_k}^\theta + \left(\tilde{A}_\varepsilon \bar{\mathbf{U}}_{t_k}^\theta + \Sigma_\varepsilon^2 \tilde{s}_\theta\left(t_k, \bar{\mathbf{U}}_{t_k}^\theta\right)\right)(t - t_k) + \Sigma_\varepsilon\left(B_t - B_{t_k}\right), \tag{13}
$$

with initialization $\bar{\mathbf{U}}_0^\theta \sim \pi_\infty$. Learning the modified score function $\nabla \log \tilde{p}_t$ is theoretically equivalent to learning the standard score function $\nabla \log p_t$, since the two differ only by a known linear term. As a consequence, the modified score approximation can be written, for any $t \geq 0$ and any $u \in \mathbb{R}^{2d}$, as

$$
\tilde{s}_\theta(t, u) := s_\theta(t, u) + \Sigma_\infty^{-1} u.
$$

Ultimately, the objective is to control the $\mathcal{W}_2$-distance between $\mathcal{L}(\bar{X}_T^\theta)$, the marginal distribution of the generated data at time $T$, and $\pi_{\mathrm{data}}$, the true data distribution (recall that $\bar{\mathbf{U}}_T^\theta = (\bar{X}_T^\theta, \bar{V}_T^\theta)^\top$).

# 3.3 Assumptions

# Regularity assumptions.

H1 The data distribution $\pi_{\mathrm{data}}$ is absolutely continuous w.r.t. the Lebesgue measure, with density $p_{\mathrm{data}}$, and the relative Fisher information between $\pi_0 = \pi_{\mathrm{data}} \otimes \pi_v$ (i.e., the initialization of the stochastic process defined in (4)) and $\pi_\infty$ is finite, i.e.
$$
\mathcal{I}\left(\pi_0 \mid \pi_\infty\right) := \int \left\| \nabla \log\left(\frac{\mathrm{d}\pi_0}{\mathrm{d}\pi_\infty}(u)\right) \right\|^2 \pi_0(\mathrm{d}u) < \infty.
$$

Assumption H1, particularly the requirement of finite Fisher information, is standard in most works establishing convergence bounds for SGMs. This condition is either explicitly assumed or implied by the stronger regularity assumptions used in the literature (Conforti et al., 2025; Strasman et al., 2025).

H2 (i) The data distribution is of the form $p_{\mathrm{data}}(x) \propto \exp\left(-(V(x) + H(x))\right)$ and satisfies:

* There exists $L > 0$ such that $|H(x) - H(y)| \leq L \|x - y\|$ for all $x, y \in \mathbb{R}^d$.
* There exists $\alpha > 0$ such that $\alpha \mathbf{I}_d \preceq \nabla^2 V(x)$ for all $x \in \mathbb{R}^d$.

(ii) $(-\log p_{\mathrm{data}})$ is $L_0$-one-sided Lipschitz, i.e., for all $x, y \in \mathbb{R}^d$,

$$
-\left(\nabla \log p_{\mathrm{data}}(x) - \nabla \log p_{\mathrm{data}}(y)\right)^\top (x - y) \leq L_0 \|x - y\|^2. \tag{14}
$$

The first point of Assumption H2 models the data distribution as a strongly log-concave component $V$ perturbed by a term $H$, similar to the settings considered in Brigati and Pedrotti (2025); Stéphanovitch (2025). Intuitively, this assumption allows the distribution to deviate from strong log-concavity via the perturbation $H$, while still maintaining sufficient regularity for the analysis. When $H = 0$, the distribution reduces to the strongly log-concave case, which is commonly used to establish contraction in the Wasserstein metric (Bruno et al., 2025; Gao et al., 2025; Strasman et al., 2025). The second point of Assumption H2 imposes a one-sided Lipschitz condition, which is weaker than requiring full Lipschitz continuity of the score function (Gentiloni-Silveri and Ocello, 2025).
Notably, H2 implies the Lipschitz continuity of the score function: for all $t \in (0, T]$, there exists $L_t > 0$ such that, for all $u, \bar{u} \in \mathbb{R}^{2d}$,

$$
\left\| \mathbf{s}_t(u) - \mathbf{s}_t(\bar{u}) \right\| \leq L_t \|u - \bar{u}\|. \tag{15}
$$

This condition can also be verified directly under standard assumptions; in particular, it holds whenever $\nabla \log p_{\mathrm{data}}$ is Lipschitz. Since $\pi_{\mathrm{data}}$ and $\pi_v$ are independent and $\pi_v$ is often Gaussian, it suffices to assume that $p_{\mathrm{data}}$ is log-smooth, a common condition in the analysis of SGMs to ensure convergence (Gao et al., 2025; Chen et al., 2023).

Furthermore, Assumption H2 ensures that $\pi_{\mathrm{data}}$ has sub-Gaussian tails (Lemma D.1). Consequently, all its polynomial moments are finite; in particular, $\pi_{\mathrm{data}}$ admits a finite second moment, a standard condition (either explicit or implied by stronger regularity assumptions) in convergence analyses of SGMs. Importantly, Assumption H2, together with the polynomial growth condition $\|\nabla V(x)\| \leq C(1 + \|x\|^m)$ for all $x \in \mathbb{R}^d$, for some $C > 0$ and $m \in \mathbb{N}$, implies Assumption H1 (Lemma D.2).

These assumptions are satisfied by standard distributions such as Gaussians and mixtures of Gaussians. They are strictly weaker than the conditions typically required in the literature to establish Wasserstein convergence guarantees, such as strong log-concavity combined with Lipschitz continuity of the score function (Gao et al., 2025; Strasman et al., 2025), which, although common in the literature, hold only for non-degenerate Gaussian distributions and therefore exclude many practically relevant settings.

# Score approximation.
H3 There exists $M \geq 0$ such that

$$
\sup_{k \in \{0, \ldots, N-1\}} \left\| \tilde{\mathbf{s}}_{T-t_k}\left(\bar{\mathbf{U}}_{t_k}^\theta\right) - \tilde{s}_\theta\left(T - t_k, \bar{\mathbf{U}}_{t_k}^\theta\right) \right\|_{L_2} \leq M.
$$

Assumption H3 is standard in the literature (De Bortoli et al., 2021; Conforti and Léonard, 2022; Gao et al., 2025; Bruno et al., 2025; Strasman et al., 2025; Gentiloni-Silveri and Ocello, 2025; Cordero-Encinar et al., 2025), as essentially all convergence proofs for diffusion-based score models require that the neural network has learned the score within some uniform error. This condition quantifies the ability of the neural network architecture to approximate the true score function and serves to control the score approximation error.

# 3.4 Main Results

We establish here the Wasserstein-2 convergence of CLD-based SGMs under these weak assumptions. A key step is to show that, under Assumption H2, the scaled score function $\Sigma_\varepsilon^2 \nabla \log p_t$ (resp. $\Sigma_\varepsilon^2 \nabla \log \tilde{p}_t$) is $L_t$-Lipschitz (resp. $\tilde{L}_t$-Lipschitz) for $t > 0$ (Proposition B.1). This, in particular, yields an exponential decay of the operator norm $\|\Sigma_\varepsilon^2 \nabla^2 \log \tilde{p}_t\|$ as $t \to \infty$. The following theorem provides, to the best of our knowledge, the first convergence rates in Wasserstein distance for CLD-based approaches, and aligns with recent developments in the literature on Variance-Preserving and Variance-Exploding SGMs.

Theorem 3.1. Assume that Assumptions H1-H3 hold.
Then, there exist $c_1, c_2 > 0$ such that, for all $h > 0$,

$$
\mathcal{W}_2\left(\pi_{\mathrm{data}}, \mathcal{L}\left(\bar{X}_T^\theta\right)\right) \leq c_1 \mathrm{e}^{-c_2 T} \mathcal{W}_2\left(\pi_{\mathrm{data}} \otimes \pi_v, \pi_\infty\right) + c_1 \sigma^2 M + c_1 \sqrt{h}.
$$

Proof. Let $P_X : \mathbb{R}^{2d} \to \mathbb{R}^d$ denote the projection $P_X(x, v) = x$. Using that $P_X$ is 1-Lipschitz for the Euclidean norm yields

$$
\mathcal{W}_2\left(\pi_{\mathrm{data}}, \mathcal{L}\left(\bar{X}_T^\theta\right)\right) \leq \mathcal{W}_2\left(\pi_{\mathrm{data}} \otimes \pi_v, \mathcal{L}\left(\bar{\mathbf{U}}_T^\theta\right)\right). \tag{16}
$$

The right-hand side of (16) is then bounded by decomposing the total generation error, via the triangle inequality, into the three sources of error for SGMs discussed in Section 2:

$$
\begin{aligned}
\mathcal{W}_2\left(\pi_{\mathrm{data}} \otimes \pi_v, \mathcal{L}\left(\bar{\mathbf{U}}_T^\theta\right)\right) &\leq \mathcal{W}_2\left(\mathcal{L}\left(\overleftarrow{\mathbf{U}}_T\right), \mathcal{L}\left(\bar{\mathbf{U}}_T\right)\right) + \mathcal{W}_2\left(\mathcal{L}\left(\bar{\mathbf{U}}_T^\infty\right), \mathcal{L}\left(\bar{\mathbf{U}}_T^\theta\right)\right) \\
&\quad + \mathcal{W}_2\left(\mathcal{L}\left(\bar{\mathbf{U}}_T\right), \mathcal{L}\left(\bar{\mathbf{U}}_T^\infty\right)\right),
\end{aligned}
$$

where $\bar{\mathbf{U}}_T$ and $\bar{\mathbf{U}}_T^\infty$ are defined in Equation (12) and $\bar{\mathbf{U}}_T^\theta$ in Equation (13).
The first term (discretization error) is controlled by Lemma B.2, which ensures that there exists $c_1 > 0$ such that, for all $h > 0$,

$$
\mathcal{W}_2\left(\mathcal{L}\left(\overleftarrow{\mathbf{U}}_T\right), \mathcal{L}\left(\bar{\mathbf{U}}_T\right)\right) \leq c_1 \sqrt{h}.
$$

The second term (score approximation error) is bounded by Lemma B.3:

$$
\mathcal{W}_2\left(\mathcal{L}\left(\bar{\mathbf{U}}_T^\infty\right), \mathcal{L}\left(\bar{\mathbf{U}}_T^\theta\right)\right) \leq c_1 \sigma^2 M.
$$

Finally, the third term (mixing error) is controlled by Lemma B.4, which guarantees the existence of $c_2 > 0$ such that

$$
\mathcal{W}_2\left(\mathcal{L}\left(\bar{\mathbf{U}}_T\right), \mathcal{L}\left(\bar{\mathbf{U}}_T^\infty\right)\right) \leq c_1 \mathrm{e}^{-c_2 T} \mathcal{W}_2\left(\pi_{\mathrm{data}} \otimes \pi_v, \pi_\infty\right).
$$

Combining these three bounds with (16) concludes the proof.

Theorem 3.1 establishes convergence rates in the Wasserstein distance for CLD-based approaches for all $\varepsilon \geq 0$, recovering the vanilla CLD when $\varepsilon = 0$. In this case, our result aligns with the KL convergence analyses of kinetic Langevin dynamics by Chen et al. (2023) and Conforti et al. (2025) for the specific choice $a = 1$ and $\sigma = 2$. It is worth emphasizing that, under our weaker assumptions, no equivalence holds between KL and Wasserstein convergence, so our results are not implied by existing KL-based analyses. Beyond this theoretical bound, our analysis indicates that smaller values of $v$ yield better log-concavity constants; however, $v$ is typically chosen small but not too small, to avoid an explosion of the Lipschitz constant. This remark is consistent with the empirical evidence brought forward by Dockhorn et al.
(2022), which suggests that small values of $v$ may improve training stability and sampling performance.

# 3.5 Strongly Log-Concave Case

This subsection focuses on the elliptic case, i.e., when $\varepsilon > 0$. In this setting, the forward process associated with the CLD becomes a multidimensional Ornstein-Uhlenbeck process with matrix-valued drift and diffusion coefficients. The presence of the additional noise term on the position coordinate restores ellipticity, which allows us to extend classical convergence analyses developed for VP and VE diffusions to this kinetic framework.

Crucially, in the strongly log-concave case, i.e., when $H = 0$ in Assumption H2, the upper bound can be expressed with more explicit constants that depend on the regularity of the data. Moreover, in this case, the one-sided Lipschitz condition becomes equivalent to the Lipschitz continuity of the score function. Assumption H2 then reduces to the following assumption.

$\mathbf{H2}^{\prime}$ The data distribution is absolutely continuous w.r.t. the Lebesgue measure, is of the form $p_{\mathrm{data}}(x) \propto \mathrm{e}^{-V(x)}$, and is $\alpha_0$-strongly log-concave and $L_0$-log-smooth, i.e., there exist $\alpha_0 > 0$ and $L_0 > 0$ such that

$$
\alpha_0 \mathbf{I}_d \preceq \nabla^2 V(x) \preceq L_0 \mathbf{I}_d, \quad \text{for all } x \in \mathbb{R}^d.
$$

Under this assumption, the forward flow preserves both strong log-concavity and smoothness. Indeed, Propositions C.1 and C.2 guarantee that $p_t$ remains $\alpha_t$-log-concave and $L_t$-log-smooth for all $t \in [0, T]$, with $\alpha_t$ and $L_t$ explicitly defined as functions of $\alpha_0$ and $L_0$ in the respective propositions. Such regularity properties are fundamental for proving exponential contraction in the Wasserstein metric, and are consistent with the analysis of classical (VP) diffusion models (Bruno et al., 2025; Gao et al., 2025; Strasman et al., 2025).
In contrast, Chen et al. (2023) obtain Wasserstein convergence guarantees without requiring strong log-concavity, relying instead on the compactness of the domain and on (15), a setting in which convergence in KL divergence is effectively equivalent. The following theorem presents Wasserstein convergence results under assumptions for which no such equivalence with the KL divergence holds. In particular, our result is not implied by existing analyses based on KL convergence.

Theorem 3.2. Assume that $\mathbf{H2}^{\prime}$ and H3 hold, and let $\varepsilon > 0$. If the step size $h$ satisfies

$$
0 < h < \frac{2 \min_k \alpha_{t_k}\left(\sigma^2 \wedge \varepsilon^2\right) - (\sigma - \varepsilon)^2 \max_k L_{t_k} - (a + 1)^2}{\|A\|^2 + (\varepsilon^4 + \sigma^4) \max_k L_{t_k}^2 + 2(\sigma^2 \vee \varepsilon^2) \|A\| \max_k L_{t_k}}, \tag{17}
$$

then

$$
\mathcal{W}_2\left(\pi_{\mathrm{data}}, \mathcal{L}\left(\bar{X}_T^\theta\right)\right) \lesssim K_T \mathrm{e}^{-aT} \mathcal{W}_2\left(\pi_{\mathrm{data}} \otimes \pi_v, \pi_\infty\right) + \sigma^2 M + \sqrt{h}\, C_a(\varepsilon),
$$

where $K_T = (1 + \max\{a + 1; a(a + 1)\} T)$ and

$$
C_a(\varepsilon) = \Big(2\|A\|^4 B_\varepsilon + 4d\left(a^2 \sigma^2 + \varepsilon\right)^2 \Lambda_\varepsilon^*(T)\Big) h + 4d\Big(\|A\|^2 + \sigma^4 \sup_{t \in [0,T]} L_{T-t}^2\Big),
$$

with $B_\varepsilon$ and $\Lambda_\varepsilon^*(T)$ as in Lemma C.3.

Proof. The error decomposition is the same as in Theorem 3.1. The full statement and proof of each error term are provided in Appendix C.

This bound highlights the stabilizing role of the parameter $\varepsilon > 0$, which restores ellipticity in the dynamics.
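To make the step-size condition (17) concrete, the short sketch below evaluates the admissible bound numerically. The constants $\min_k \alpha_{t_k}$, $\max_k L_{t_k}$ and $\|A\|$ are hypothetical placeholder values chosen only for illustration, not quantities derived from Propositions C.1-C.2.

```python
import numpy as np

# Hypothetical regularity constants, for illustration of condition (17) only;
# in the paper they come from Propositions C.1-C.2 and the drift matrix A.
a, sigma, eps = 0.1, 1.0, 1.0
alpha_min, L_max, norm_A = 1.0, 1.0, 1.0  # min_k alpha_{t_k}, max_k L_{t_k}, ||A||

# Numerator and denominator of the right-hand side of (17).
num = (2 * alpha_min * min(sigma**2, eps**2)
       - (sigma - eps)**2 * L_max - (a + 1)**2)
den = (norm_A**2 + (eps**4 + sigma**4) * L_max**2
       + 2 * max(sigma**2, eps**2) * norm_A * L_max)
h_max = num / den

# With sigma = eps the (sigma - eps)^2 penalty vanishes, and the bound is
# usable only when the numerator is positive.
print(round(h_max, 3))  # 0.158
```

Note that for other parameter choices the numerator can become negative, in which case (17) yields no admissible step size; this reflects the requirement that the contraction coming from $\alpha_{t_k}$ dominate the drift and smoothness penalties.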
A key observation is that

$$
\Sigma_\varepsilon \nabla^2 \log p_t\, \Sigma_\varepsilon \preccurlyeq -\left(\varepsilon^2 \wedge \sigma^2\right) \alpha_t \mathbf{I}_{2d},
$$

a bound that is negative only for positive values of $\varepsilon$. In this sense, increasing $\varepsilon$ tends to enhance the contractive behavior of the dynamics, as also reflected in the admissible step-size condition (17). However, this effect is not purely beneficial: several terms in the discretization error scale with $\varepsilon^2$, illustrating that excessive noise injection may deteriorate the regularity of the process. Consequently, the choice of $\varepsilon$ involves a trade-off between these competing effects, which is illustrated numerically in Section 4.

Remark 3.3. A finite second-order moment is also necessary in this approach, and is deduced from $\mathbf{H2}^{\prime}$ (see, e.g., Lemma B.1, Gentiloni-Silveri and Ocello, 2025). Regarding H3, the score approximation is now understood to concern the true score function rather than the modified one.

# 4 Experiments

We illustrate the effect of the regularization parameter $\varepsilon$ on the generation quality of CLDs on a simple yet challenging toy dataset. The regularization parameter $\varepsilon$ is chosen in $\{0, 0.1, 0.25, 0.5, 1\}$; notably, $\varepsilon = 0$ corresponds to the vanilla CLD setting. Our source code is publicly available.

Evaluation metric. To assess the quality of the generated samples, directly computing the Wasserstein-2 distance is infeasible, as it requires solving a computationally expensive optimal transport problem. Instead, we approximate the $\mathcal{W}_2$-distance between the generated samples (with distribution $\hat{\pi}$) and the training samples (with distribution $\pi_{\mathrm{data}}$) using the sliced Wasserstein distance (Flamary et al., 2021).
It is defined as $SW_2^2(\pi_{\mathrm{data}}, \hat{\pi}) = \mathbb{E}_{\mathbf{u} \sim \mathcal{U}(\mathbb{S}^{d-1})}\left[\mathcal{W}_2^2\left(\mathbf{u}_\#\pi_{\mathrm{data}}, \mathbf{u}_\#\hat{\pi}\right)\right]$, where $\mathcal{U}(\mathbb{S}^{d-1})$ denotes the uniform distribution over the unit sphere and $\mathbf{u}_\#$ is the push-forward operator associated with $\mathbf{u}$. The expectation is approximated by standard Monte Carlo with 2000 projections, and $\pi_{\mathrm{data}}$ and $\hat{\pi}$ are replaced by their empirical counterparts.

Dataset. We evaluate the generation quality on the Funnel distribution, which is characterized by a strong imbalance in variance across dimensions and was previously used in Thin et al. (2021). To further illustrate our results, we extend the evaluation to two additional challenging toy datasets (Appendix E.5): MG-25 (a 25-mode, 100-dimensional Gaussian mixture) and Diamonds (a 2-dimensional Gaussian mixture with a diamond-shaped geometry).

Hybrid Score Matching. Following the insights of Ho et al. (2020), the networks are trained to predict the noise (or rescaled noise) added during the forward process. When $\varepsilon = 0$, we use the positive weighting function proposed by Dockhorn et al. (2022) (see page 5, $\lambda(t) = \ell_t^{-2}$, in Dockhorn et al. (2022)). A similar reweighting, however, is not feasible for $\varepsilon \neq 0$ due to the matrix-valued nature of the objective function. Empirically, we observe that much of the training variance arises from the determinant computation involved in the $2 \times 2$ matrix inversions. To mitigate this, we set $\lambda(t) = \det(\Sigma_{0,t})^2$, which effectively stabilizes training.
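A toy numerical illustration of this stabilization effect, using placeholder covariance values rather than the actual forward covariance $\Sigma_{0,t}$ of our schedule: near $t \approx 0$ the per-coordinate $2 \times 2$ covariance is nearly singular, its inverse blows up like $1/\det$, and multiplying by $\lambda(t) = \det(\Sigma_{0,t})^2$ cancels that blow-up.

```python
import numpy as np

# Placeholder near-singular 2x2 covariance, standing in for Sigma_{0,t}
# at a small time t (illustrative values only).
S = np.array([[1e-3, 5e-4],
              [5e-4, 1.0]])

det = np.linalg.det(S)
S_inv = np.linalg.inv(S)             # entries scale like 1/det -> high variance
weighted = det**2 * (S_inv @ S_inv)  # det^2 weight cancels the 1/det^2 blow-up

print(np.abs(S_inv).max() > 1e2)     # True: the raw inverse is huge
print(np.abs(weighted).max() < 2.0)  # True: the weighted quantity stays O(1)
```

The squared inverse appears because the objective involves $\|\Sigma_{0,t}^{-1}(\cdot)\|^2$, so the unweighted loss carries a $1/\det^2$ factor that the $\det^2$ weight removes.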
We parameterize the score network as $s_\theta(\vec{\mathbf{U}}_t, t) := -\Sigma_{0,t}^{-1} \alpha_\theta(\vec{\mathbf{U}}_t, t)$, so that the hybrid score matching objective for $\varepsilon > 0$ is given by

$$
\mathcal{L}_{(\mathrm{HSM})^\varepsilon}(\theta) = \mathbb{E}\left[\det\left(\Sigma_{0,t}\right)^2 \left\| \Sigma_{0,t}^{-1}\left(s_\theta\left(\vec{\mathbf{U}}_t, t\right) - \Sigma_{0,t}^{1/2} G_{2d}\right) \right\|^2\right], \tag{18}
$$

where $G_{2d}$ denotes a $2d$-dimensional standard Gaussian noise.

Model, Training and Generation. All score networks share the same architecture: a fully connected neural network with three hidden layers of width 512 (see Figure 3). Training is performed using the Adam optimizer to minimize the hybrid score matching objective in (18), with a learning rate of $10^{-4}$ over 2000 epochs. The training set consists of 50 000 samples. For evaluation, we generate 50 000 samples using the Euler-Maruyama discretization scheme with $N = 1000$ steps and compare them against a test set of 50 000 samples. Both training and generation are independently repeated five times. The training (Algorithm 1) and sampling (Algorithm 2) procedures are provided in Appendix E.1.

Effect of the regularization parameter. Figure 1 illustrates the Wasserstein error for different values of the regularization parameter $\varepsilon \in \{0, 0.1, 0.25, 0.5, 1\}$ and drift coefficient $a \in \{0.1, 0.25, 0.5, 1, 2\}$. Across all values of $a$, introducing a small regularization parameter $\varepsilon$ notably improves generation quality, even though the score network in the regularized case must predict a vector twice as long as in the non-regularized one. Moreover, regularization consistently reduces the variance across runs. For smaller values of $a$, the error increases sharply when $\varepsilon = 0$, and also for large $\varepsilon$ values.
In contrast, for moderate values of $a$, the error becomes negligible, with $\varepsilon \in [0.1, 0.5]$ yielding slightly better performance than the other settings. It is worth noting that our experimental configuration closely follows that of Dockhorn et al. (2022), using $\sigma = \sqrt{2}$, $a = 2$, and in particular $\varepsilon = 0$. This observation justifies their choice of $a = 2$ for the vanilla CLD.

# Effect of $\varepsilon$ in controlled settings.

Since varying $\varepsilon$ modifies both the stationary distribution and the noise schedule, two factors known to strongly influence performance (Guo et al., 2023; Chen et al., 2023; Strasman et al., 2025), it is important to control for these effects. To mitigate this confounding factor, one can fix the stationary distribution of the base case to $\mathcal{N}(0_{2d}, \mathbf{I}_{2d})$ and maintain comparable noise levels in the position and velocity spaces by setting $a(\varepsilon) = 1 - \varepsilon^2/2$ and $\sigma(\varepsilon) = \sqrt{4 + \varepsilon^2}$. This choice ensures that the stationary distribution remains close to $\mathcal{N}(0_{2d}, \mathbf{I}_{2d})$ for small values of $\varepsilon$. Although this adjustment becomes less accurate for larger $\varepsilon$, there is no practical limitation preventing the use of higher regularization values.

Figure 2 still shows an improvement in generation quality for small regularization parameters $\varepsilon$. To confirm that this effect is not tied to the discretization method, we reproduce the experiments using a Leapfrog integrator. As expected, the Leapfrog scheme outperforms Euler-Maruyama, yet the relative benefit of regularization persists. Finally, we emphasize that our objective is not to conduct an extensive numerical comparison of integrators or training strategies, but rather to highlight the potential of introducing controlled regularization within the CLD framework, a direction theoretically supported by Theorem 3.2.
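For completeness, the sliced Wasserstein estimator used as our evaluation metric can be sketched in a few lines of NumPy. This is a hand-rolled Monte Carlo version for illustration; the experiments rely on the POT library of Flamary et al. (2021) instead, and equal sample sizes are assumed here so that the 1-d transport reduces to sorting.

```python
import numpy as np

def sliced_w2_sq(X, Y, n_proj=2000, rng=None):
    """Monte Carlo estimate of SW_2^2 between two empirical samples.

    For each random direction u on the unit sphere, the 1-d W_2^2 between
    the projected samples is the mean squared difference of the sorted
    projections (equal sample sizes assumed)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    U = rng.standard_normal((n_proj, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # u ~ Unif(S^{d-1})
    PX, PY = X @ U.T, Y @ U.T                      # shape (n, n_proj)
    PX.sort(axis=0)
    PY.sort(axis=0)
    # Average over both samples (1-d W_2^2) and projections (the expectation).
    return np.mean((PX - PY) ** 2)

# Sanity check: identical samples are at distance zero.
X = np.random.default_rng(3).standard_normal((500, 10))
print(sliced_w2_sq(X, X.copy()))  # 0.0
```

A constant shift of the sample by a vector $c$ gives $SW_2^2 \approx \|c\|^2/d$, since the squared projection of the shift averages to $\|c\|^2/d$ over uniform directions, which provides a quick correctness check.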
![](images/4ca53a2512c98fcc1f4824b92cfb6113c473fb96705cb5d241b51aee814c382c.jpg)
Figure 1: Mean $\mathcal{W}_2$ distance over 5 repetitions between the test set and generated samples on the Funnel distribution in dimension 100. Error bars represent $\pm$ one standard deviation.

![](images/5dfb8cdc66237023a0286e3b58c88db56088e1f97338488c64af1efdcdeeee8a.jpg)
Figure 2: Mean $\mathcal{W}_2$ distance over 5 repetitions between the test set and generated samples on the Funnel distribution with $a(\varepsilon) = 1 - \varepsilon^2/2$ and $\sigma(\varepsilon) = \sqrt{4 + \varepsilon^2}$.

# 5 Discussion

In this paper, we present the first theoretical analysis of the sampling error of CLDs in the Wasserstein metric under weaker assumptions than those previously used in the literature. Our results show that CLD-based samplers can achieve comparable convergence rates while effectively leveraging the structure of the extended space. We further analyze a generalized dynamics that extends classical CLDs by introducing a smoothness-controlling hyperparameter regulating the noise on the data coordinate. This parameter provides finer control over the regularity of the sample paths and plays a central role in the discretization error analysis. Both theoretical and empirical results suggest that appropriately tuning this parameter leads to improved sampling quality and stability. Overall, our work offers both theoretical insights and practical guidance for CLD methods in generative modeling, particularly in scenarios where standard assumptions may not hold. Several promising directions remain for future research. Replacing the Euler discretization scheme with a higher-order method, such as the Leapfrog integrator, which is specifically designed for CLD-based dynamics, could further enhance sampling performance. Analyzing such schemes would likely yield sharper convergence rates consistent with the numerical results.
Moreover, developing denoiser architectures specifically tailored to the extended space represents another promising avenue for applied research, potentially leading to tighter bounds on the approximation error.

# Acknowledgements

The PhD of Sobihan Surendran was funded by the Paris Region PhD Fellowship Program of Région Ile-de-France. The work of Antonio Ocello was funded by the European Union (ERC-2022-SYG-OCEAN-101071601). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. We would also like to thank SCAI (Sorbonne Center for Artificial Intelligence) for providing the computing clusters.

# References

Achleitner, F., Arnold, A., and Stürzer, D. (2015). Large-time behavior in non-symmetric Fokker-Planck equations. In Rivista di Matematica della Università di Parma, pages 1-68.
Anderson, B. D. (1982). Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313-326.
Benton, J., De Bortoli, V., Doucet, A., and Deligiannidis, G. (2024). Nearly $d$-linear convergence bounds for diffusion models via stochastic localization. In International Conference on Learning Representations.
Bouchut, F., James, F., and Mancini, S. (2005). Uniqueness and weak stability for multi-dimensional transport equations with one-sided Lipschitz coefficient. Annali della Scuola Normale Superiore di Pisa-Classe di Scienze, 4(1):1-25.
Brascamp, H. J. and Lieb, E. H. (1976). On extensions of the Brunn-Minkowski and Prékopa-Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. Journal of Functional Analysis, 22:366-389.
Brigati, G. and Pedrotti, F. (2025). Heat flow, log-concavity, and Lipschitz transport maps. Electronic Communications in Probability, 30:1-12.
Bruno, S., Zhang, Y., Lim, D.-Y., Akyildiz, Ö. D., and Sabanis, S. (2025). On diffusion-based generative models and their error bounds: The log-concave case with full convergence estimates. Transactions on Machine Learning Research.
Cattiaux, P., Conforti, G., Gentil, I., and Léonard, C. (2023). Time reversal of diffusion processes under a finite entropy condition. Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques, 59(4):1844-1881.
Chen, S., Chewi, S., Li, J., Li, Y., Salim, A., and Zhang, A. (2023). Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. In International Conference on Learning Representations.
Conforti, G., Durmus, A., and Silveri, M. G. (2025). KL convergence guarantees for score diffusion models under minimal data assumptions. SIAM Journal on Mathematics of Data Science, 7(1):86-109.
Conforti, G. and Léonard, C. (2022). Time reversal of Markov processes with jumps under a finite entropy condition. Stochastic Processes and their Applications, 144:85-124.
Cordero-Encinar, P., Akyildiz, O. D., and Duncan, A. B. (2025). Non-asymptotic analysis of diffusion annealed Langevin Monte Carlo for generative modelling. arXiv preprint arXiv:2502.09306.
Dalalyan, A. S. and Riou-Durand, L. (2020). On sampling from a log-concave density using kinetic Langevin diffusions. Bernoulli, 26(3):1956-1988.
De Bortoli, V., Thornton, J., Heng, J., and Doucet, A. (2021). Diffusion Schrödinger bridge with applications to score-based generative modeling. In Advances in Neural Information Processing Systems, volume 34, pages 17695-17709.
Dockhorn, T., Vahdat, A., and Kreis, K. (2022). Score-based generative modeling with critically-damped Langevin diffusion. In International Conference on Learning Representations.
Eberle, A., Guillin, A., and Zimmer, R. (2019). Couplings and quantitative contraction rates for Langevin dynamics. The Annals of Probability, 47(4):1982-2010.
Flamary, R., Courty, N., Gramfort, A., Alaya, M. Z., Boisbunon, A., Chambon, S., Chapel, L., Corenflos, A., Fatras, K., Fournier, N., Gautheron, L., Gayraud, N. T., Janati, H., Rakotomamonjy, A., Redko, I., Rolet, A., Schutz, A., Seguy, V., Sutherland, D. J., Tavenard, R., Tong, A., and Vayer, T. (2021). POT: Python Optimal Transport. Journal of Machine Learning Research, 22(78):1-8.
Gao, X., Nguyen, H. M., and Zhu, L. (2025). Wasserstein convergence guarantees for a general class of score-based generative models. Journal of Machine Learning Research, 26(43):1-54.
Gentiloni-Silveri, M. and Ocello, A. (2025). Beyond log-concavity and score regularity: Improved convergence bounds for score-based generative models in $\mathcal{W}_2$-distance. In International Conference on Machine Learning.
Gong, S., Li, M., Feng, J., Wu, Z., and Kong, L. (2023). DiffuSeq: Sequence to sequence text generation with diffusion models. In International Conference on Learning Representations.
Guo, Q., Liu, S., Yu, Y., and Luo, P. (2023). Rethinking the noise schedule of diffusion-based generative models. arXiv preprint arXiv:2309.12345.
Haussmann, U. G. and Pardoux, E. (1986). Time reversal of diffusions. The Annals of Probability, pages 1188-1205.
Ho, J., Jain, A., and Abbeel, P. (2020). Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, volume 33, pages 6840-6851.
Hyvärinen, A. and Dayan, P. (2005). Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4).
Lee, H., Lu, J., and Tan, Y. (2022). Convergence for score-based generative modeling with polynomial complexity. In Advances in Neural Information Processing Systems, volume 35, pages 22870-22882.
Lee, H., Lu, J., and Tan, Y. (2023). Convergence of score-based generative modeling for general data distributions. In International Conference on Algorithmic Learning Theory, pages 946-985. PMLR.
+Li, H., Yang, Y., Chang, M., Chen, S., Feng, H., Xu, Z., Li, Q., and Chen, Y. (2022). SRDiff: Single image super-resolution with diffusion probabilistic models. Neurocomputing, 479:47-59.
+Lugmayr, A., Danelljan, M., Romero, A., Yu, F., Timofte, R., and Van Gool, L. (2022). RePaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11461-11471.
+Monmarché, P. (2023). Almost sure contraction for diffusions on $\mathbb{R}^d$. Application to generalised Langevin diffusions. Stochastic Processes and their Applications, 161:316-349.
+Moufad, B., Janati, Y., Bedin, L., Durmus, A., Douc, R., Moulines, E., and Olsson, J. (2025). Variational diffusion posterior sampling with midpoint guidance. In International Conference on Learning Representations.
+Neal, R. M. (2011). MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo, volume 54, chapter 5, pages 113-162. Chapman & Hall/CRC Press.
+Nichol, A. Q. and Dhariwal, P. (2021). Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pages 8162-8171. PMLR.
+Pham, L.-T.-N., Shariatian, D., Ocello, A., Conforti, G., and Durmus, A. (2025). Bit-level discrete diffusion with Markov probabilistic models: An improved framework with sharp convergence bounds under minimal assumptions. In International Conference on Machine Learning.
+
+Saumard, A. and Wellner, J. A. (2014). Log-concavity and strong log-concavity: A review. Statistics Surveys, 8:45-114.
+Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and Ganguli, S. (2015). Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256-2265. PMLR.
+Song, Y. and Ermon, S. (2019). Generative modeling by estimating gradients of the data distribution. In Advances in Neural Information Processing Systems, volume 32, pages 11918-11930.
+Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. (2021). Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations.
+Stéphanovitch, A. (2025). Regularity of the score function in generative models. arXiv preprint arXiv:2506.19559.
+Strasman, S., Ocello, A., Boyer, C., Le Corff, S., and Lemaire, V. (2025). An analysis of the noise schedule for score-based generative models. Transactions on Machine Learning Research.
+Thin, A., Janati El Idrissi, Y., Le Corff, S., Ollion, C., Moulines, E., Doucet, A., Durmus, A., and Robert, C. (2021). NEO: Non equilibrium sampling on the orbits of a deterministic transform. In Advances in Neural Information Processing Systems, volume 34, pages 17060-17071.
+Victorino Cardoso, G., Janati El Idrissi, Y., Le Corff, S., and Moulines, E. (2024). Monte Carlo guided diffusion for Bayesian linear inverse problems. In International Conference on Learning Representations.
+Villani, C. (2009). Hypocoercivity, volume 202 of Memoirs of the American Mathematical Society. American Mathematical Society.
+Vincent, P. (2011). A connection between score matching and denoising autoencoders. Neural Computation, 23(7):1661-1674.
+Wu, L., Trippe, B., Naesseth, C., Blei, D., and Cunningham, J. P. (2023). Practical and asymptotically exact conditional sampling in diffusion models. In Advances in Neural Information Processing Systems, volume 36, pages 31372-31403.
+Yang, L., Zhang, Z., Song, Y., Hong, S., Xu, R., Zhao, Y., Zhang, W., Cui, B., and Yang, M.-H. (2023). Diffusion models: A comprehensive survey of methods and applications. ACM Computing Surveys, 56(4):1-39.
+
+# Supplementary Material for "Wasserstein Convergence of Critically Damped Langevin Diffusions"
+
+# Table of Contents
+
+1 Introduction
+2 Notation and Background
+3 Wasserstein Convergence of CLDs
+
+3.1 Motivation
+3.2 Settings: Dynamics and Backward Discretization
+3.3 Assumptions
+3.4 Main Results
+3.5 Strongly Log-Concave Case
+
+4 Experiments
+5 Discussion
+
+A Forward process of Critically-Damped dynamics
+B Proof of Theorem 3.1
+
+B.1 Propagation of the regularity assumptions
+B.2 Proofs of the main results
+
+C Proof of Theorem 3.2
+
+C.1 Propagation of the regularity assumptions
+C.2 Proofs of the main results
+
+D Technical Lemmata
+E Numerical Illustration
+
+E.1 CLD training and sampling
+E.2 Time-rescaling of the forward SDE
+E.3 Score approximation
+E.4 Neural network architectures
+E.5 Additional experiments
+
+# A Forward process of Critically-Damped dynamics
+
+In this section, we establish several mathematical properties of the forward processes:
+
+$$
+\mathrm{d}\overrightarrow{\mathbf{U}}_{t} = A\overrightarrow{\mathbf{U}}_{t}\,\mathrm{d}t + \Sigma_{\varepsilon}\,\mathrm{d}B_{t}, \quad \overrightarrow{\mathbf{U}}_{0} \sim \pi_{\mathrm{data}} \otimes \pi_{v},
+$$
+
+as defined in (4) with $\varepsilon = 0$ or in (9) with $\varepsilon \geq 0$. These results will be used throughout our subsequent analysis.
+
+Lemma A.1. 
Let $A$ be the matrix defined in (5), then
+
+$$
+A = \left(\left(\begin{array}{cc} -a & -1 \\ 1 & 0 \end{array}\right) \times \left(\begin{array}{cc} -a & 1 \\ 0 & -a \end{array}\right) \times \left(\begin{array}{cc} 0 & 1 \\ -1 & -a \end{array}\right)\right) \otimes \mathbf{I}_{d}
+$$
+
+so that
+
+$$
+\mathrm{e}^{tA} = \mathrm{e}^{-ta}\left(\begin{array}{cc} 1+at & a^{2}t \\ -t & 1-at \end{array}\right) \otimes \mathbf{I}_{d}, \tag{19}
+$$
+
+and
+
+$$
+\begin{array}{l} \left\|\mathrm{e}^{tA}\right\| \leq \left\|\mathrm{e}^{tA}\right\|_{1}^{1/2}\left\|\mathrm{e}^{tA}\right\|_{\infty}^{1/2} \leq (1+\max\{a+1; a(a+1)\}t)\,\mathrm{e}^{-ta} \\ \leq \left(1+(a+1)^{2}t\right)\mathrm{e}^{-ta}. \end{array}
+$$
+
+Proof. The Jordan matrix decomposition of $A$ when $d = 1$ is given by,
+
+$$
+A_{1} = \left(\begin{array}{cc} 0 & a^{2} \\ -1 & -2a \end{array}\right) = \left(\begin{array}{cc} -a & -1 \\ 1 & 0 \end{array}\right) \times \left(\begin{array}{cc} -a & 1 \\ 0 & -a \end{array}\right) \times \left(\begin{array}{cc} 0 & 1 \\ -1 & -a \end{array}\right).
+$$
+
+We can use this decomposition to obtain a matrix factorization in any dimension. As $A^{k} = A_{1}^{k} \otimes \mathbf{I}_{d}$ for all $k \in \mathbb{N}$,
+
+$$
+\mathrm{e}^{tA} = \sum_{k=0}^{\infty} \frac{t^{k}}{k!}\left(A_{1}^{k} \otimes \mathbf{I}_{d}\right) = \left(\sum_{k=0}^{\infty} \frac{t^{k}}{k!} A_{1}^{k}\right) \otimes \mathbf{I}_{d} = \mathrm{e}^{tA_{1}} \otimes \mathbf{I}_{d}.
+$$
+
+Finally, we deduce an upper bound on the spectral norm of $\mathrm{e}^{tA}$, as
+
+$$
+\left\|\mathrm{e}^{tA}\right\|_{1} \leq \mathrm{e}^{-ta}\max\left\{1+(a+1)t;\ 1+a(a+1)t\right\},
+$$
+
+and
+
+$$
+\left\|\mathrm{e}^{tA}\right\|_{\infty} \leq \mathrm{e}^{-ta}\max\left\{1+a(a+1)t;\ 1+(a+1)t\right\}. 
+$$
+
+Then,
+
+$$
+\left\|\mathrm{e}^{tA}\right\| \leq \left\|\mathrm{e}^{tA}\right\|_{1}^{1/2}\left\|\mathrm{e}^{tA}\right\|_{\infty}^{1/2} \leq \mathrm{e}^{-ta}(1+\max\{a+1; a(a+1)\}t),
+$$
+
+which concludes the proof.
+
+Lemma A.2. Let $(\overrightarrow{\mathbf{U}}_t)_{t\in[0,T]}$ be a solution to the forward process (9) with initial condition
+
+$$
+\overrightarrow{\mathbf{U}}_{0} \sim \pi_{\mathrm{data}} \otimes \pi_{v},
+$$
+
+where $\pi_v$ is a probability distribution on $(\mathbb{R}^d,\mathcal{B}(\mathbb{R}^d))$. Then, the conditional law of $\vec{\mathbf{U}}_t$ given $\vec{\mathbf{U}}_0$ is Gaussian with mean $\mu_{t|0}$ and covariance $\Sigma_{0,t}$ defined by
+
+$$
+\mu_{t|0} := \mathrm{e}^{tA}\vec{\mathbf{U}}_{0}, \quad \Sigma_{0,t} := \Sigma_{\infty} - \mathrm{e}^{tA}\Sigma_{\infty}\left(\mathrm{e}^{tA}\right)^{\top}, \tag{20}
+$$
+
+with
+
+$$
+\Sigma_{\infty} := \frac{1}{4}\left(\begin{array}{cc} 5\varepsilon^{2}a^{-1} + a\sigma^{2} & -2\varepsilon^{2}a^{-2} \\ -2\varepsilon^{2}a^{-2} & (\varepsilon^{2}+a^{2}\sigma^{2})a^{-3} \end{array}\right) \otimes \mathbf{I}_{d}. \tag{21}
+$$
+
+The result still holds when the forward process is defined as in (4) by setting $\varepsilon = 0$.
+
+Proof. Recall that the forward process $(\overrightarrow{\mathbf{U}}_t)_{t\in[0,T]}$ is the solution to,
+
+$$
+\mathrm{d}\vec{\mathbf{U}}_{t} = A\vec{\mathbf{U}}_{t}\,\mathrm{d}t + \Sigma_{\varepsilon}\,\mathrm{d}B_{t}. \tag{22}
+$$
+
+With initial condition $\overrightarrow{\mathbf{U}}_0 \sim \pi_{\mathrm{data}} \otimes \pi_v$, we have
+
+$$
+\overrightarrow{\mathbf{U}}_{t} = \mathrm{e}^{tA}\overrightarrow{\mathbf{U}}_{0} + \int_{0}^{t} \mathrm{e}^{(t-s)A}\Sigma_{\varepsilon}\,\mathrm{d}B_{s}. 
+$$
+
+This means that the law of $\overrightarrow{\mathbf{U}}_t$, conditional on the initial condition $\overrightarrow{\mathbf{U}}_0$, is Gaussian with mean
+
+$$
+\mu_{t|0} := \mathbb{E}\left[\overrightarrow{\mathbf{U}}_{t}\right] = \mathrm{e}^{tA}\overrightarrow{\mathbf{U}}_{0},
+$$
+
+and covariance
+
+$$
+\begin{array}{l} \Sigma_{0,t} := \operatorname{Cov}\left(\overrightarrow{\mathbf{U}}_{t}\right) = \int_{0}^{t} \mathrm{e}^{(t-s)A}\Sigma_{\varepsilon}^{2}\left(\mathrm{e}^{(t-s)A}\right)^{\top}\mathrm{d}s \\ = \left(\int_{0}^{t} \mathrm{e}^{(t-s)A_{1}}\Sigma_{\varepsilon}^{2}\left(\mathrm{e}^{(t-s)A_{1}}\right)^{\top}\mathrm{d}s\right) \otimes \mathbf{I}_{d}. \end{array}
+$$
+
+Using Lemma A.1, for $\delta \geq 0$,
+
+$$
+\mathrm{e}^{\delta A_{1}}\Sigma_{\varepsilon}^{2}\mathrm{e}^{\delta A_{1}^{\top}} = \mathrm{e}^{-2a\delta}\left(\begin{array}{cc} a^{4}\sigma^{2}\delta^{2} + \varepsilon^{2}(1+a\delta)^{2} & \delta\left(a^{2}\sigma^{2}\left(1-a\delta\right) - \varepsilon^{2}(1+a\delta)\right) \\ \delta\left(a^{2}\sigma^{2}\left(1-a\delta\right) - \varepsilon^{2}(1+a\delta)\right) & \sigma^{2}(1-a\delta)^{2} + \delta^{2}\varepsilon^{2} \end{array}\right). 
+$$
+
+Hence, a straightforward computation yields, with $\alpha_{t} = (-(5 + 2at(3+at))\varepsilon^{2} - a^{2}(1+2at(1+at))\sigma^{2})a^{-1}$ and $\gamma_{t} = 2((\varepsilon + at\varepsilon)^{2} + a^{4}t^{2}\sigma^{2})a^{-2}$,
+
+$$
+\begin{array}{l} \Sigma_{0,t} = \frac{1}{4}\left(\begin{array}{cc} 5\varepsilon^{2}a^{-1} + a\sigma^{2} & -2\varepsilon^{2}a^{-2} \\ -2\varepsilon^{2}a^{-2} & (\varepsilon^{2}+a^{2}\sigma^{2})a^{-3} \end{array}\right) \\ + \frac{\mathrm{e}^{-2at}}{4}\left(\begin{array}{ll} \alpha_{t} & \gamma_{t} \\ \gamma_{t} & (-(1+2at(1+at))\varepsilon^{2} - a^{2}(1+2at(-1+at))\sigma^{2})a^{-3} \end{array}\right) \tag{23} \\ = \Sigma_{\infty} - \mathrm{e}^{tA}\Sigma_{\infty}(\mathrm{e}^{tA})^{\top}, \end{array}
+$$
+
+where we used that,
+
+$$
+\mathrm{e}^{tA}\Sigma_{\infty}\left(\mathrm{e}^{tA}\right)^{\top} = \int_{0}^{\infty} \mathrm{e}^{(t+s)A}\Sigma_{\varepsilon}^{2}\left(\mathrm{e}^{(t+s)A}\right)^{\top}\mathrm{d}s = \int_{t}^{\infty} \mathrm{e}^{\delta A}\Sigma_{\varepsilon}^{2}\left(\mathrm{e}^{\delta A}\right)^{\top}\mathrm{d}\delta = \Sigma_{\infty} - \Sigma_{0,t}.
+$$
+
+Lemma A.3. The covariance matrix $\Sigma_{0,t}$ defined in (20) satisfies, for all $\varepsilon > 0$,
+
+$$
+\begin{array}{l} \lambda_{\min}\left(\Sigma_{0,t}\right) \geq \max\left\{\frac{\sigma^{2}}{4}\min\{a, 1/a\} - \left(\frac{\sigma^{2}}{4}\max\{a, 1/a\} + \frac{5\varepsilon^{2}}{4a}\right)\mathrm{e}^{-2at},\right. \\ \left.\min\{\varepsilon^{2}, \sigma^{2}\}\frac{1-\mathrm{e}^{-2at}}{2a(1+(a+1)^{2}t)^{2}}\right\}, \end{array}
+$$
+
+$$
+\lambda_{\max}(\Sigma_{0,t}) \leq \frac{\sigma^{2}}{4}\max\{a, 1/a\} + \frac{5\varepsilon^{2}}{4a}.
+$$
+
+Proof. 
First, consider the following decomposition of $\Sigma_{\infty}$ defined in (21)
+
+$$
+\Sigma_{\infty} = \frac{1}{4}\left(\begin{array}{cc} a\sigma^{2} & 0 \\ 0 & \sigma^{2}a^{-1} \end{array}\right) + \frac{\varepsilon^{2}}{4a^{3}}\left(\begin{array}{cc} 5a^{2} & -2a \\ -2a & 1 \end{array}\right) =: \frac{1}{4}\left(\begin{array}{cc} a\sigma^{2} & 0 \\ 0 & \sigma^{2}a^{-1} \end{array}\right) + E_{\varepsilon}.
+$$
+
+Since the trace and the determinant of $E_{\varepsilon}$ are positive, $E_{\varepsilon}$ is positive definite, and therefore
+
+$$
+\begin{array}{l} \lambda_{\min}(\Sigma_{\infty}) \geq \frac{1}{4}\lambda_{\min}\left(\left(\begin{array}{cc} a\sigma^{2} & 0 \\ 0 & \sigma^{2}a^{-1} \end{array}\right)\right) = \frac{\sigma^{2}}{4}\min\{a, 1/a\}, \\ \lambda_{\max}\left(\Sigma_{\infty}\right) \leq \frac{1}{4}\lambda_{\max}\left(\left(\begin{array}{cc} a\sigma^{2} & 0 \\ 0 & \sigma^{2}a^{-1} \end{array}\right)\right) + \lambda_{\max}\left(E_{\varepsilon}\right) \leq \frac{\sigma^{2}}{4}\max\left\{a, 1/a\right\} + \frac{5\varepsilon^{2}}{4a}. \tag{24} \end{array}
+$$
+
+Using that $\Sigma_{0,t} = \Sigma_{\infty} - \mathrm{e}^{tA}\Sigma_{\infty}\mathrm{e}^{tA^{\top}}$ together with Weyl's inequality, we have that
+
+$$
+\lambda_{\min}\left(\Sigma_{0,t}\right) \geq \lambda_{\min}\left(\Sigma_{\infty}\right) - \lambda_{\max}\left(\mathrm{e}^{tA}\Sigma_{\infty}\mathrm{e}^{tA^{\top}}\right).
+$$
+
+Note that, as $\Sigma_{\infty}$ is positive semidefinite,
+
+$$
+\lambda_{\max}\left(\mathrm{e}^{tA}\Sigma_{\infty}\mathrm{e}^{tA^{\top}}\right) = \lambda_{\max}\left(\mathrm{e}^{tA}\Sigma_{\infty}^{1/2}\right)^{2} \leq \lambda_{\max}\left(\mathrm{e}^{tA}\right)^{2}\lambda_{\max}\left(\Sigma_{\infty}\right) \leq \mathrm{e}^{-2at}\lambda_{\max}\left(\Sigma_{\infty}\right). 
+$$
+
+On the other hand, using that $\Sigma_{0,t} = \int_0^t \mathrm{e}^{sA}\Sigma_{\varepsilon}^{2}\mathrm{e}^{sA^{\top}}\,\mathrm{d}s$ yields
+
+$$
+\Sigma_{0,t} \succcurlyeq \min\left\{\varepsilon^{2}, \sigma^{2}\right\} \int_{0}^{t} \mathrm{e}^{sA}\mathrm{e}^{sA^{\top}}\,\mathrm{d}s,
+$$
+
+therefore,
+
+$$
+\begin{array}{l} \lambda_{\min}\left(\Sigma_{0,t}\right) \geq \min\left\{\varepsilon^{2}, \sigma^{2}\right\} \int_{0}^{t} \lambda_{\min}\left(\mathrm{e}^{sA}\mathrm{e}^{sA^{\top}}\right)\mathrm{d}s \\ \geq \min\left\{\varepsilon^{2}, \sigma^{2}\right\} \int_{0}^{t} \frac{\mathrm{e}^{-2as}}{(1+(a+1)^{2}s)^{2}}\,\mathrm{d}s \\ \geq \min\left\{\varepsilon^{2}, \sigma^{2}\right\} \frac{1-\mathrm{e}^{-2at}}{2a(1+(a+1)^{2}t)^{2}}, \end{array}
+$$
+
+which gives the other lower bound on $\lambda_{\min}(\Sigma_{0,t})$.
+
+To obtain the bound on $\lambda_{\max}(\Sigma_{0,t})$, it is enough to note that $\Sigma_{0,t} \preccurlyeq \Sigma_{\infty}$ and to use (24).
+
+Lemma A.4 (Forward process $\mathcal{W}_2$-contraction). The forward process, defined as in (9), is contractive for the $\mathcal{W}_2$ distance. In particular, it holds that
+
+$$
+\mathcal{W}_{2}\left(\mathcal{L}(\overrightarrow{\mathbf{U}}_{T}), \pi_{\infty}\right) \leq K_{T}\,\mathrm{e}^{-aT}\,\mathcal{W}_{2}\left(\pi_{\mathrm{data}} \otimes \pi_{v}, \pi_{\infty}\right),
+$$
+
+where $\pi_{\infty}$ is the stationary distribution of (9) as defined in Lemma A.2 and
+
+$$
+K_{T} := (1+\max\{a+1; a(a+1)\}T).
+$$
+
+Proof. Let $u = (x, v) \in \mathbb{R}^{2d}$ (resp. $\bar{u} = (\bar{x}, \bar{v}) \in \mathbb{R}^{2d}$) and denote by $(\vec{\mathbf{U}}_t^u)_{t \in [0,T]}$ (resp. $(\vec{\mathbf{U}}_t^{\bar{u}})_{t \in [0,T]}$) the solution of (9), with initial condition $\vec{\mathbf{U}}_0^u = u$ (resp. $\vec{\mathbf{U}}_0^{\bar{u}} = \bar{u}$). 
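The contraction mechanism used in this proof can also be checked numerically: two paths of (9) driven by the same Brownian increments differ exactly by the deterministic flow $\mathrm{e}^{tA}(u - \bar{u})$, whose norm decays at the rate given by Lemma A.1. A minimal sketch for $d = 1$ (numpy/scipy; the parameter values are illustrative and not taken from the paper):

```python
# Numerical illustration (d = 1) of the synchronous-coupling contraction:
# two Euler-Maruyama paths of the forward SDE driven by the SAME Brownian
# increments differ by the deterministic flow e^{tA}(u0 - ubar0), whose norm
# is controlled by the bound of Lemma A.1. Parameter values are illustrative.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
a, sigma, eps = 1.5, 1.0, 0.3
A1 = np.array([[0.0, a**2], [-1.0, -2.0 * a]])   # matrix A for d = 1
Seps = np.diag([eps, sigma])                      # Sigma_eps for d = 1

dt, n = 1e-3, 2000
u = np.array([2.0, -1.0])
ubar = np.array([-0.5, 0.5])
diff0 = u - ubar
for _ in range(n):
    dB = rng.normal(scale=np.sqrt(dt), size=2)    # shared Brownian increment
    u = u + dt * (A1 @ u) + Seps @ dB
    ubar = ubar + dt * (A1 @ ubar) + Seps @ dB
t = n * dt

# the noise cancels, so the difference tracks e^{tA}(u0 - ubar0) up to O(dt)
assert np.allclose(u - ubar, expm(t * A1) @ diff0, atol=1e-2)
# contraction at rate K_t e^{-a t}, with K_t = 1 + max{a+1, a(a+1)} t
Kt = 1 + max(a + 1, a * (a + 1)) * t
assert np.linalg.norm(u - ubar) <= Kt * np.exp(-a * t) * np.linalg.norm(diff0)
```

Because the coupling is synchronous, the noise term cancels in the difference, which is precisely the cancellation exploited in the argument below.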
By Itô's lemma,
+
+$$
+\mathrm{d}\left(\mathrm{e}^{-tA}\overrightarrow{\mathbf{U}}_{t}^{u}\right) = \mathrm{e}^{-tA}\Sigma_{\varepsilon}\,\mathrm{d}B_{t}.
+$$
+
+Using a synchronous coupling for $(\vec{\mathbf{U}}_t^u)_{t\in[0,T]}$ and $(\vec{\mathbf{U}}_t^{\bar{u}})_{t\in[0,T]}$, we have that
+
+$$
+\overrightarrow{\mathbf{U}}_{t}^{u} - \overrightarrow{\mathbf{U}}_{t}^{\bar{u}} = \mathrm{e}^{tA}(u - \bar{u}).
+$$
+
+By definition of the Wasserstein-2 distance, $\mathcal{W}_2(\mathcal{L}(\vec{\mathbf{U}}_t^u),\mathcal{L}(\vec{\mathbf{U}}_t^{\bar{u}})) \leq \|\vec{\mathbf{U}}_t^u - \vec{\mathbf{U}}_t^{\bar{u}}\|_{L_2}$. Then, by Lemma A.1,
+
+$$
+\left\|\overrightarrow{\mathbf{U}}_{t}^{u} - \overrightarrow{\mathbf{U}}_{t}^{\bar{u}}\right\|_{L_{2}} \leq \left\|\mathrm{e}^{tA}\right\|\|u - \bar{u}\|_{L_{2}} \leq K_{t}\,\mathrm{e}^{-ta}\|u - \bar{u}\|_{L_{2}}, \tag{25}
+$$
+
+with
+
+$$
+K_{t} := (1+\max\{a+1; a(a+1)\}t).
+$$
+
+Finally, assume that $\bar{u} \sim \pi_{\infty}$, $u \sim \pi_{\mathrm{data}} \otimes \pi_v$ and fix any coupling $\gamma \in \Pi(\pi_{\mathrm{data}} \otimes \pi_v, \pi_{\infty})$. Using that $\pi_{\infty}$ is the stationary distribution of $\vec{\mathbf{U}}_t$ and taking the infimum over $\gamma \in \Pi(\pi_{\mathrm{data}} \otimes \pi_v, \pi_{\infty})$ yields,
+
+$$
+\mathcal{W}_{2}\left(\mathcal{L}\left(\overrightarrow{\mathbf{U}}_{T}\right), \pi_{\infty}\right) \leq K_{T}\,\mathrm{e}^{-aT}\,\mathcal{W}_{2}\left(\pi_{\mathrm{data}} \otimes \pi_{v}, \pi_{\infty}\right),
+$$
+
+which finishes the proof.
+
+# B Proof of Theorem 3.1
+
+In this section we prove Theorem 3.1. We use notations from (12) (resp. (13)) for the continuous time interpolation of the discretized backward process with modified score function $\bar{\mathbf{U}}_t$ (resp. 
for the continuous time interpolation of the discretized backward process with approximated modified score function $\bar{\mathbf{U}}_t^{\theta}$). We first establish the propagation of Lipschitz regularity, followed by the proof of Theorem 3.1. To do so, we decompose the generation error as the sum of the discretization error (Lemma B.2), the approximation error (Lemma B.3), and the mixing time error (Lemma B.4).
+
+# B.1 Propagation of the regularity assumptions
+
+Proposition B.1. Assume that Assumption H2 holds. Then, for all $t > 0$, $\Sigma_{\varepsilon}^{2}\nabla\log p_{t}$ (resp. $\Sigma_{\varepsilon}^{2}\nabla\log\tilde{p}_{t}$) is $L_{t}$-Lipschitz (resp. $\tilde{L}_t$-Lipschitz): for all $u \in \mathbb{R}^{2d}$,
+
+$$
+\left\|\Sigma_{\varepsilon}^{2}\nabla^{2}\log p_{t}(u)\right\| \leq L_{t}.
+$$
+
+Moreover, there exists a constant $C > 0$ such that for all $u \in \mathbb{R}^{2d}$,
+
+$$
+\left\|\Sigma_{\varepsilon}^{2}\nabla^{2}\log\tilde{p}_{t}(u)\right\| \leq \tilde{L}_{t} \leq C\left(1+\frac{1}{\sqrt{t}}\right)\mathrm{e}^{-2at}. \tag{26}
+$$
+
+Proof. Step 1: Lower bound on $\nabla^2\log p_t$. Recall the following equality in law given by the modified kinetic OU process (9)
+
+$$
+\overrightarrow{\mathbf{U}}_{t} \stackrel{\mathcal{L}}{=} \mathrm{e}^{tA}\overrightarrow{\mathbf{U}}_{0} + \sqrt{\Sigma_{0,t}}\,G,
+$$
+
+with $\overrightarrow{\mathbf{U}}_0 \sim \pi_{\mathrm{data}} \otimes \pi_v$, $G \sim \mathcal{N}(0,\mathbf{I}_{2d})$, where $G$ and $\overrightarrow{\mathbf{U}}_0$ are independent, and $\Sigma_{0,t}$ is defined in (20). 
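The closed-form conditional covariance behind this equality in law can be sanity-checked numerically against its defining integral $\int_0^t \mathrm{e}^{sA}\Sigma_{\varepsilon}^{2}\mathrm{e}^{sA^{\top}}\mathrm{d}s$; a minimal sketch for $d = 1$ (numpy/scipy, arbitrary parameter values):

```python
# Sanity check (d = 1) of Lemma A.2: the closed-form conditional covariance
# Sigma_{0,t} = Sigma_inf - e^{tA} Sigma_inf e^{tA}^T, with Sigma_inf as in
# (21), against the defining integral int_0^t e^{sA} Sigma_eps^2 e^{sA}^T ds.
# Parameter values are arbitrary.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

a, sigma, eps = 1.3, 0.8, 0.4
A1 = np.array([[0.0, a**2], [-1.0, -2.0 * a]])
Seps2 = np.diag([eps**2, sigma**2])               # Sigma_eps^2

Sinf = 0.25 * np.array([                          # (21), d = 1
    [5 * eps**2 / a + a * sigma**2, -2 * eps**2 / a**2],
    [-2 * eps**2 / a**2, (eps**2 + a**2 * sigma**2) / a**3],
])

t = 0.7
closed = Sinf - expm(t * A1) @ Sinf @ expm(t * A1).T        # (20)
integ = np.array([[quad(lambda s, i=i, j=j:
                        (expm(s * A1) @ Seps2 @ expm(s * A1).T)[i, j],
                        0, t)[0]
                   for j in range(2)] for i in range(2)])
assert np.allclose(closed, integ, atol=1e-7)                # (20) == integral
assert np.all(np.linalg.eigvalsh(closed) > 0)               # Sigma_{0,t} > 0
```

If (20)–(21) are correct, `closed` matches the quadrature entrywise up to integration error, and its positive eigenvalues are consistent with the lower bound of Lemma A.3.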
Writing $q_{t|0}$ for the conditional density of $\overrightarrow{\mathbf{U}}_t$ given $\overrightarrow{\mathbf{U}}_0$, we have
+
+$$
+\begin{array}{l} p_{t}(u_{t}) = \int_{\mathbb{R}^{2d}} p_{0}(u_{0})q_{t|0}(u_{t}|u_{0})\,\mathrm{d}u_{0} \\ = \int_{\mathbb{R}^{2d}} p_{0}(u_{0})\det(2\pi\Sigma_{0,t})^{-1/2}\exp\left(-\frac{1}{2}(u_{t}-\mathrm{e}^{tA}u_{0})^{\top}\Sigma_{0,t}^{-1}(u_{t}-\mathrm{e}^{tA}u_{0})\right)\mathrm{d}u_{0} \\ = \det\left(\mathrm{e}^{-tA}\right)\int_{\mathbb{R}^{2d}} p_{0}\left(\mathrm{e}^{-tA}z\right)\det\left(2\pi\Sigma_{0,t}\right)^{-1/2}\exp\left(-\frac{1}{2}\left(u_{t}-z\right)^{\top}\Sigma_{0,t}^{-1}\left(u_{t}-z\right)\right)\mathrm{d}z. \end{array}
+$$
+
+As also observed in Saumard and Wellner (2014, Proposition 7.1), we get
+
+$$
+\begin{array}{l} \nabla^{2}\log p_{t}(u) = \operatorname{Var}\left(\nabla\phi_{0,t}\left(Y_{0}\right) \mid Y_{0}+Y_{1} = u\right) - \mathbb{E}\left[\nabla^{2}\phi_{0,t}\left(Y_{0}\right) \mid Y_{0}+Y_{1} = u\right] \tag{27} \\ = \operatorname{Var}\left(\nabla\phi_{1,t}\left(Y_{1}\right) \mid Y_{0}+Y_{1} = u\right) - \mathbb{E}\left[\nabla^{2}\phi_{1,t}\left(Y_{1}\right) \mid Y_{0}+Y_{1} = u\right], \end{array}
+$$
+
+for $Y_0 = \mathrm{e}^{tA}\overrightarrow{\mathbf{U}}_0$ and $Y_{1} = \sqrt{\Sigma_{0,t}}\,G$ and for $\phi_{0,t}$ and $\phi_{1,t}$ such that for all $u \in \mathbb{R}^{2d}$,
+
+$$
+\mathrm{e}^{-\phi_{0,t}(u)} := \det\left(\mathrm{e}^{-tA}\right)p_{0}\left(\mathrm{e}^{-tA}u\right),
+$$
+
+$$
+\mathrm{e}^{-\phi_{1,t}(u)} := \det\left(2\pi\Sigma_{0,t}\right)^{-1/2}\exp\left(-\frac{1}{2}u^{\top}\Sigma_{0,t}^{-1}u\right). 
+$$ + +This implies that + +$$ +\nabla^ {2} \log p _ {t} (u) \succcurlyeq - \mathbb {E} [ \nabla^ {2} \phi_ {0, t} (Y _ {0}) | Y _ {0} + Y _ {1} = u ], \tag {28} +$$ + +$$ +\nabla^ {2} \log p _ {t} (u) \succcurlyeq - \mathbb {E} \left[ \nabla^ {2} \phi_ {1, t} (Y _ {1}) | Y _ {0} + Y _ {1} = u \right]. +$$ + +Note that for all $u\in \mathbb{R}^{2d}$ + +$$ +\begin{array}{l} \nabla^ {2} \phi_ {0, t} (u) = - \mathrm {e} ^ {- t A ^ {\top}} \nabla^ {2} \log p _ {0} \left(\mathrm {e} ^ {- t A} u\right) \mathrm {e} ^ {- t A}, \\ \nabla^ {2} \phi_ {1, t} (u) = \Sigma_ {0, t} ^ {- 1}. \\ \end{array} +$$ + +From (Bouchut et al., 2005, Lemma 2.2) together with (14), we get that the one-sided Lipschitz assumption entails the following inequality over the Hessian of the log-density, since $\log p_0(u) = \log \pi_{\mathrm{data}}(x) + \log p_v(v)$ , + +$$ +\begin{array}{l} \nabla^ {2} \left(- \log p _ {0}\right) (u) = \left( \begin{array}{c c} - \nabla^ {2} \log p _ {\mathrm {d a t a}} (x) & 0 \\ 0 & - \nabla^ {2} \log p _ {v} (v) \end{array} \right) \\ = \left( \begin{array}{c c} - \nabla^ {2} \log p _ {\mathrm {d a t a}} (x) & 0 \\ 0 & v ^ {- 2} \mathbf {I} _ {d} \end{array} \right) \preccurlyeq \max \left\{L _ {0}, \frac {1}{v ^ {2}} \right\} \mathbf {I} _ {2 d}. \\ \end{array} +$$ + +Therefore, for $t > 0$ , from (28), we get + +$$ +\nabla^ {2} \log p _ {t} (u) \succcurlyeq - \mathfrak {h} _ {t} \mathbf {I} _ {2 d}, +$$ + +where $\mathfrak{h}_t = \min \left\{\| \mathrm{e}^{-tA}\|^2\max \left\{L_0,v^{-2}\right\} ;\| \Sigma_{0,t}^{-1}\| \right\}$ + +Bound on $\mathfrak{h}_t$ . By Lemma A.1, we have that $\left\| \mathrm{e}^{-tA}\right\|^2 \leq \left(1 + (a + 1)^2 t\right)^2 \mathrm{e}^{2ta}$ . 
Moreover, from Lemma A.3, it follows that
+
+$$
+\left\|\Sigma_{0,t}^{-1}\right\| = \frac{1}{\lambda_{\min}(\Sigma_{0,t})} \leq \frac{1}{\left\lfloor\lambda_{\min}(\Sigma_{\infty}) - \lambda_{\max}(\mathrm{e}^{tA}\Sigma_{\infty}\mathrm{e}^{tA^{\top}})\right\rfloor_{+}},
+$$
+
+with $\lfloor\cdot\rfloor_{+}$ denoting the positive part of a real number. Therefore,
+
+$$
+\left\|\Sigma_{0,t}^{-1}\right\| \leq \frac{4}{\left\lfloor\sigma^{2}\min\{a, 1/a\} - \left(\sigma^{2}\max\{a, 1/a\} + 5\varepsilon^{2}a^{-1}\right)\mathrm{e}^{-2at}\right\rfloor_{+}} =: \mathfrak{h}_{2,t}.
+$$
+
+Combining the two previous bounds, we obtain
+
+$$
+\mathfrak{h}_{t} \leq \min\left\{\mathfrak{h}_{1,t}; \mathfrak{h}_{2,t}\right\}, \tag{29}
+$$
+
+where $\mathfrak{h}_{1,t} := \left(1+(a+1)^2 t\right)^2 \mathrm{e}^{2ta}\max\left\{L_0, v^{-2}\right\}$.
+
+Step 2: Upper bound on $\nabla^2\log p_t$. We first express the conditional density of $\vec{\mathbf{U}}_0$ given $\vec{\mathbf{U}}_t$ as follows
+
+$$
+q_{0|t}\left(\left(x_{0}, v_{0}\right)^{\top} \mid u_{t}\right) \propto \left(\mathrm{e}^{-V\left(x_{0}\right)-H\left(x_{0}\right)} \otimes \mathcal{N}\left(v_{0}; 0_{d}, v^{2}\mathbf{I}_{d}\right)\right)\mathcal{N}\left(u_{t}; \mathrm{e}^{tA}\left(x_{0}, v_{0}\right)^{\top}, \Sigma_{0,t}\right). \tag{30}
+$$
+
+First, we consider the log-concave part of the above distribution,
+
+$$
+\nu_{t} \propto \left(\mathrm{e}^{-V\left(x_{0}\right)} \otimes \mathcal{N}\left(v_{0}; 0_{d}, v^{2}\mathbf{I}_{d}\right)\right)\mathcal{N}\left(u_{t}; \mathrm{e}^{tA}\left(x_{0}, v_{0}\right)^{\top}, \Sigma_{0,t}\right). 
\tag{31}
+$$
+
+Since $\nabla^2 V(x) \succcurlyeq \alpha\mathbf{I}_d$ for all $x \in \mathbb{R}^d$, we obtain
+
+$$
+\nabla^{2}\left(-\log\nu_{t}\right) \succcurlyeq \mathrm{e}^{-tA}\left(\begin{array}{cc} \alpha\mathbf{I}_{d} & 0 \\ 0 & \frac{1}{v^{2}}\mathbf{I}_{d} \end{array}\right)\mathrm{e}^{-tA^{\top}} + \Sigma_{0,t}^{-1}.
+$$
+
+Therefore, by the Brascamp–Lieb inequality (Brascamp and Lieb, 1976),
+
+$$
+\operatorname{Cov}(\nu_{t}) \preccurlyeq \left(\mathrm{e}^{-tA}\left(\begin{array}{cc} \alpha\mathbf{I}_{d} & 0 \\ 0 & \frac{1}{v^{2}}\mathbf{I}_{d} \end{array}\right)\mathrm{e}^{-tA^{\top}} + \Sigma_{0,t}^{-1}\right)^{-1}.
+$$
+
+Using the identity $\Sigma_{0,t} = \Sigma_{\infty} - \mathrm{e}^{tA}\Sigma_{\infty}\mathrm{e}^{tA^{\top}}$ given in Lemma A.2, we now expand $\Sigma_{0,t}$ at zero as
+
+$$
+\Sigma_{0,t} = t\left(\begin{array}{cc} \varepsilon^{2} & 0 \\ 0 & \sigma^{2} \end{array}\right) + \mathcal{O}(t^{2}),
+$$
+
+which implies that
+
+$$
+\Sigma_{0,t}^{-1} = \frac{1}{t}\underbrace{\left(\begin{array}{cc} 1/\varepsilon^{2} & 0 \\ 0 & 1/\sigma^{2} \end{array}\right)}_{= \Sigma_{\varepsilon}^{-2}} + o\big(\frac{1}{t}\big).
+$$
+
+Therefore, the covariance matrix near zero satisfies
+
+$$
+\operatorname{Cov}(\nu_{t}) \preccurlyeq \left(\begin{array}{cc} \left(\alpha+\frac{1}{\varepsilon^{2}t}\right)^{-1} & 0 \\ 0 & \left(\frac{1}{v^{2}}+\frac{1}{\sigma^{2}t}\right)^{-1} \end{array}\right) + o(t).
+$$
+
+Next, the Lipschitz perturbation term, following Brigati and Pedrotti (2025), can be bounded as
+
+$$
+\operatorname{Cov}(q_{0|t}(\cdot\, 
\mid u_{t})) \preccurlyeq \underbrace{\left(\begin{array}{cc} \left(\frac{L}{\alpha+(\varepsilon^{2}t)^{-1}}+\sqrt{\frac{1}{\alpha+(\varepsilon^{2}t)^{-1}}}\right)^{2} & 0 \\ 0 & \left(\frac{1}{v^{2}}+\frac{1}{\sigma^{2}t}\right)^{-1} \end{array}\right)}_{:= M_{\varepsilon,t}} + o(t).
+$$
+
+Using (27), we have
+
+$$
+\nabla^{2}\log p_{t}(u) = \Sigma_{0,t}^{-1}\operatorname{Cov}\left(q_{0|t}(\cdot \mid u)\right)\Sigma_{0,t}^{-1} - \Sigma_{0,t}^{-1}, \tag{32}
+$$
+
+so that
+
+$$
+\begin{array}{l} \nabla^{2}\log p_{t}(u) \preccurlyeq \left(\frac{1}{t}\Sigma_{\varepsilon}^{-2}+o\left(\frac{1}{t}\right)\right)\left(M_{\varepsilon,t}+o(t)\right)\left(\frac{1}{t}\Sigma_{\varepsilon}^{-2}+o\left(\frac{1}{t}\right)\right)-\left(\frac{1}{t}\Sigma_{\varepsilon}^{-2}+o\left(\frac{1}{t}\right)\right) \\ = \left(\begin{array}{cc} \alpha_{t} & 0 \\ 0 & \beta_{t} \end{array}\right)+o\left(\frac{1}{t}\right), \end{array}
+$$
+
+with
+
+$$
+\alpha_{t} := \frac{L^{2}}{\left(\alpha\varepsilon^{2}t+1\right)^{2}}+\frac{2L}{\left(\varepsilon^{2}t\right)^{1/2}\left(\alpha\varepsilon^{2}t+1\right)^{3/2}}-\frac{\alpha}{\alpha\varepsilon^{2}t+1}, \quad \beta_{t} := -\frac{1}{\sigma^{2}t+v^{2}}.
+$$
+
+Consequently, for all $\varepsilon > 0$, as $t \to 0^{+}$,
+
+$$
+-\left(\begin{array}{cc} \frac{2L}{\varepsilon\sqrt{t}} & 0 \\ 0 & \frac{1}{v^{2}} \end{array}\right)+o\left(\frac{1}{\sqrt{t}}\right) \preccurlyeq \nabla^{2}\log p_{t}(u) \preccurlyeq \left(\begin{array}{cc} \frac{2L}{\varepsilon\sqrt{t}} & 0 \\ 0 & -\frac{1}{v^{2}} \end{array}\right)+o\left(\frac{1}{\sqrt{t}}\right). \tag{33}
+$$
+
+Step 3: Uniform bound on $\nabla^{2}\log p_{t}$. We now analyze the structure of the minimum in the upper bound of $\mathfrak{h}_{t}$ in (29). 
We observe that the first term is increasing, equals $\max\{L_0, v^{-2}\}$ for $t \to 0$ and diverges as $t \to +\infty$. In contrast, the second term is decreasing: it diverges as $t \to 0$ and converges to $4/(\sigma^{2}\min\{a, 1/a\})$ as $t \to +\infty$. Therefore, the minimum coincides with the first term for $t \leq T_{\mathrm{change}}$ and with the second term for $t > T_{\mathrm{change}}$. Using (33), we obtain the following uniform bound, for all $\varepsilon > 0$,
+
+$$
+\left\|\nabla^{2}\log p_{t}(u)\right\| \leq \max\left\{\mathfrak{h}_{1,T_{\mathrm{change}}}; Ct^{-1/2}\right\}, \quad \text{for } t > 0. \tag{34}
+$$
+
+This bound is uniform in $\varepsilon > 0$; therefore, letting $\varepsilon \to 0$, we have
+
+$$
+\left\|\Sigma_{\varepsilon}^{2}\nabla^{2}\log p_{t}(u)\right\| \leq \max\left\{\mathfrak{h}_{1,T_{\mathrm{change}}}; Ct^{-1/2}\right\}, \quad \text{for } t > 0. \tag{35}
+$$
+
+Since $\tilde{p}_t = p_t / p_\infty$, and $p_\infty$ is the density of a centered Gaussian vector of variance $\Sigma_\infty$, we have
+
+$$
+\nabla^{2}\log\tilde{p}_{t}(u) = \nabla^{2}\log p_{t}(u) + \Sigma_{\infty}^{-1}. \tag{36}
+$$
+
+Therefore, the same bound as in (35) holds for the modified score.
+
+Step 4: Exponential decay of the modified score. From (32), we have the following equality
+
+$$
+\nabla^{2}\log\tilde{p}_{t}(u) = \nabla^{2}\log p_{t}(u)+\Sigma_{\infty}^{-1} = \Sigma_{0,t}^{-1}\operatorname{Cov}(q_{0|t}(\cdot \mid u))\Sigma_{0,t}^{-1}-\Sigma_{0,t}^{-1}+\Sigma_{\infty}^{-1},
+$$
+
+where $q_{0|t}$ is the conditional density defined in (30). By applying Lemma D.9, together with the decomposition (20) and the positivity of the covariance, we obtain
+
+$$
+\nabla^{2}\log\tilde{p}_{t}(u) \succcurlyeq -\Sigma_{0,t}^{-1}\Big(\mathrm{e}^{tA}\Sigma_{\infty}\big(\mathrm{e}^{tA}\big)^{\top}\Big)\Sigma_{\infty}^{-1}. 
+$$
+
+Since $\Sigma_{0,t} = \Sigma_{\infty}+\mathcal{O}(\mathrm{e}^{-2at})$ as $t \to \infty$, there exists a constant $C > 0$ such that
+
+$$
+\left\|\Sigma_{0,t}^{-1}\left(\mathrm{e}^{tA}\Sigma_{\infty}\left(\mathrm{e}^{tA}\right)^{\top}\right)\Sigma_{\infty}^{-1}\right\| \leq C\mathrm{e}^{-2at}.
+$$
+
+On the other hand, using the fact that $\Sigma_{0,t}^{-1} \succcurlyeq \Sigma_{\infty}^{-1}$ (see (20)), we get
+
+$$
+\nabla^{2}\log\tilde{p}_{t}(u) \preccurlyeq \Sigma_{0,t}^{-1}\operatorname{Cov}(q_{0|t}(\cdot \mid u))\Sigma_{0,t}^{-1}.
+$$
+
+Following the same steps as in the derivation of the upper bound on $\nabla^{2}\log p_{t}$ (Step 2), we obtain
+
+$$
+\nabla^{2}\left(-\log\nu_{t}\right) \succcurlyeq \mathrm{e}^{-tA}\left(\begin{array}{cc} \alpha\mathbf{I}_{d} & 0 \\ 0 & \frac{1}{v^{2}}\mathbf{I}_{d} \end{array}\right)\mathrm{e}^{-tA^{\top}} + \Sigma_{0,t}^{-1} \succcurlyeq \mathrm{e}^{-tA}\left(\begin{array}{cc} \alpha\mathbf{I}_{d} & 0 \\ 0 & \frac{1}{v^{2}}\mathbf{I}_{d} \end{array}\right)\mathrm{e}^{-tA^{\top}},
+$$
+
+where $\nu_{t}$ is defined in (31). By the Brascamp–Lieb inequality, this implies that $\operatorname{Cov}(\nu_t) = \mathcal{O}(\mathrm{e}^{-2at})$. Next, similarly to Step 2, for the term involving the Lipschitz perturbation, and following Brigati and Pedrotti (2025, Theorem 1.3), we have
+
+$$
+\operatorname{Cov}\left(q_{0|t}(\cdot \mid u)\right) \preccurlyeq \left(LC\mathrm{e}^{-2at}+\sqrt{C\mathrm{e}^{-2at}}\right)^{2}\mathbf{I}_{2d}.
+$$
+
+Therefore, there exist a universal constant $C > 0$ and a finite time $T_{\mathrm{change}} > 0$ such that, for all $t \geq T_{\mathrm{change}}$,
+
+$$
+\left\|\nabla^{2}\log\tilde{p}_{t}(u)\right\| \leq C\mathrm{e}^{-2at}. 
$$

This implies that the modified score function is $\tilde{L}_t$-Lipschitz, with $\tilde{L}_t$ defined as

$$
\tilde{L}_{t} := \left\{ \begin{array}{ll} \max\left\{ \mathfrak{h}_{1, T_{\mathrm{change}}};\; C t^{-1/2} \right\} + \max\{a, 1/a\}, & \text{for } t \in (0, T_{\mathrm{change}}], \\ C \mathrm{e}^{-2at}, & \text{for } t \in (T_{\mathrm{change}}, +\infty), \end{array} \right.
$$

which concludes the proof.

# B.2 Proofs of the main results

Lemma B.2 (Discretization error). Assume that $H1$ and $H2$ hold. Then, for all $\eta > 0$ and all $h > 0$, there exists a constant $C > 0$ such that

$$
\mathcal{W}_{2}\left( \mathcal{L}(\overleftarrow{\mathbf{U}}_{T}), \mathcal{L}(\bar{\mathbf{U}}_{T}) \right) \leq \sqrt{h}\, C \sqrt{ \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} \left( d + \mathcal{I}\left( \pi_{\mathrm{data}} \otimes \pi_{v} \mid \pi_{\infty} \right) \right) \right) \frac{\mathrm{e}^{C a^{-1}}}{a - \eta} }, \tag{37}
$$

with

$$
B := \max_{t \in [0, T]} \left( 1 + (a + 1)^{2} (T - t) \right)^{2} \mathrm{e}^{-2a(T-t)} \left\| \overrightarrow{\mathbf{U}}_{0} \right\|_{L_{2}}^{2} + \frac{d}{2} \Big( \sigma^{2} \max\{a, 1/a\} + \frac{5 \epsilon^{2}}{a} \Big).
$$

Proof. Consider a synchronous coupling for $(\overleftarrow{\mathbf{U}}_t)_{t \in [0,T]}$ and $(\bar{\mathbf{U}}_t)_{t \in [0,T]}$, i.e., drive the two processes with the same Brownian motion and the same initial point, $\overleftarrow{\mathbf{U}}_0 = \bar{\mathbf{U}}_0$. Then, it holds that

$$
\mathcal{W}_{2}\left( \mathcal{L}(\overleftarrow{\mathbf{U}}_{T}), \mathcal{L}(\bar{\mathbf{U}}_{T}) \right) \leq \left\| \overleftarrow{\mathbf{U}}_{T} - \bar{\mathbf{U}}_{T} \right\|_{L_{2}}.
$$

Fix $0 < \Delta < h$ and let $t_N := T - \Delta$. Note that, for all $0 \leq k \leq N - 1$, from (11) and (12),

$$
\overleftarrow{\mathbf{U}}_{t_{k+1}} - \bar{\mathbf{U}}_{t_{k+1}} = \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} + \int_{t_{k}}^{t_{k+1}} \left\{ \tilde{A}_{\epsilon} \left( \overleftarrow{\mathbf{U}}_{t} - \bar{\mathbf{U}}_{t_{k}} \right) + \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t}\left( \overleftarrow{\mathbf{U}}_{t} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \bar{\mathbf{U}}_{t_{k}} \right) \right) \right\} \mathrm{d}t,
$$

where

$$
\tilde{A}_{\epsilon} = -A - \Sigma_{\epsilon}^{2} \Sigma_{\infty}^{-1}.
$$

From Monmarché (2023, Lemma 5 and Proposition 4) and Achleitner et al. (2015, Lemma 2.6), there exists a symmetric positive definite matrix $\mathfrak{M} \in \mathbb{R}^{2d \times 2d}$ such that, for any fixed $\eta > 0$, we have

$$
\mathfrak{M} \tilde{A}_{\epsilon} \preccurlyeq -(a - \eta) \mathfrak{M}. \tag{38}
$$

We then prove contraction with respect to the norm associated with $\mathfrak{M}$, defined, for all $v \in \mathbb{R}^{2d}$, by $\|v\|_{\mathfrak{M}}^2 := v^{\top} \mathfrak{M} v$.

For $t \in [t_k, t_{k+1})$,

$$
\mathrm{d}\left( \overleftarrow{\mathbf{U}}_{t} - \bar{\mathbf{U}}_{t} \right) = \tilde{A}_{\epsilon} \left( \overleftarrow{\mathbf{U}}_{t} - \bar{\mathbf{U}}_{t_{k}} \right) \mathrm{d}t + \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t}\left( \overleftarrow{\mathbf{U}}_{t} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \bar{\mathbf{U}}_{t_{k}} \right) \right) \mathrm{d}t.
$$

This means that we have

$$
\mathrm{d}\left( \left( \overleftarrow{\mathbf{U}}_{t} - \bar{\mathbf{U}}_{t} \right)^{\top} \mathfrak{M} \left( \overleftarrow{\mathbf{U}}_{t} - \bar{\mathbf{U}}_{t} \right) \right) = 2 \left( \overleftarrow{\mathbf{U}}_{t} - \bar{\mathbf{U}}_{t} \right)^{\top} \mathfrak{M}\, \mathrm{d}\left( \overleftarrow{\mathbf{U}}_{t} - \bar{\mathbf{U}}_{t} \right).
$$

It follows that

$$
\begin{array}{l}
\left\| \overleftarrow{\mathbf{U}}_{t_{k+1}} - \bar{\mathbf{U}}_{t_{k+1}} \right\|_{\mathfrak{M}}^{2} = \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} + 2 \int_{t_{k}}^{t_{k+1}} \left( \overleftarrow{\mathbf{U}}_{t} - \bar{\mathbf{U}}_{t_{k}} \right)^{\top} \mathfrak{M} \tilde{A}_{\epsilon} \left( \overleftarrow{\mathbf{U}}_{t} - \bar{\mathbf{U}}_{t_{k}} \right) \mathrm{d}t \\
\quad + 2 \int_{t_{k}}^{t_{k+1}} \left( \overleftarrow{\mathbf{U}}_{t} - \bar{\mathbf{U}}_{t_{k}} \right)^{\top} \mathfrak{M} \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t}\left( \overleftarrow{\mathbf{U}}_{t} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \bar{\mathbf{U}}_{t_{k}} \right) \right) \mathrm{d}t \\
\quad \leq \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} + 2 \left( A_{1,k} + A_{2,k} + A_{3,k} + A_{4,k} + A_{5,k} + A_{6,k} \right),
\end{array}
$$

where

$$
\begin{array}{l}
A_{1,k} := h \left( \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right)^{\top} \mathfrak{M} \tilde{A}_{\epsilon} \left( \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right) \\
\quad + h \left( \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right)^{\top} \mathfrak{M} \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \bar{\mathbf{U}}_{t_{k}} \right) \right),
\end{array}
$$

$$
A_{2,k} := \int_{t_{k}}^{t_{k+1}} \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right)^{\top} \mathfrak{M} \left\{ \tilde{A}_{\epsilon} \left( \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right) + \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \bar{\mathbf{U}}_{t_{k}} \right) \right) \right\} \mathrm{d}t,
$$

$$
A_{3,k} := \int_{t_{k}}^{t_{k+1}} \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right)^{\top} \tilde{A}_{\epsilon}^{\top} \mathfrak{M} \left( \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right) \mathrm{d}t,
$$

$$
A_{4,k} := \left( \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right)^{\top} \mathfrak{M} \Sigma_{\epsilon}^{2} \int_{t_{k}}^{t_{k+1}} \left( \tilde{\mathbf{s}}_{T-t}\left( \overleftarrow{\mathbf{U}}_{t} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) \right) \mathrm{d}t,
$$

$$
A_{5,k} := \int_{t_{k}}^{t_{k+1}} \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right)^{\top} \mathfrak{M} \tilde{A}_{\epsilon} \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right) \mathrm{d}t,
$$

$$
A_{6,k} := \int_{t_{k}}^{t_{k+1}} \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right)^{\top} \mathfrak{M} \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t}\left( \overleftarrow{\mathbf{U}}_{t} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) \right) \mathrm{d}t.
$$

Next, we bound each term of the above decomposition separately.

Bound of $\mathbb{E}[A_{1,k}]$. By Assumption H2, applying Proposition B.1, there exists a constant $C$ (depending only on the eigenvalues of $\mathfrak{M}$ and on absolute constants, and which may change from line to line) such that

$$
\left( \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right)^{\top} \mathfrak{M} \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \bar{\mathbf{U}}_{t_{k}} \right) \right) \leq C \tilde{L}_{T-t_{k}} \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2},
$$

and, using (38),

$$
\left( \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right)^{\top} \mathfrak{M} \tilde{A}_{\epsilon} \left( \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right) \leq -(a - \eta) \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2}.
$$

Combining these two bounds yields

$$
\mathbb{E}\left[ A_{1,k} \right] \leq h \left( C \tilde{L}_{T-t_{k}} - (a - \eta) \right) \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} \right].
$$

Bound of $\mathbb{E}[A_{2,k}]$.
Using the Cauchy-Schwarz inequality,

$$
\begin{array}{l}
\mathbb{E}\left[ A_{2,k} \right] \leq \mathbb{E}\left[ \left\| \int_{t_{k}}^{t_{k+1}} \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right) \mathrm{d}t \right\|^{2} \right]^{1/2} \\
\quad \times \mathbb{E}\left[ \left\| \mathfrak{M} \left\{ \tilde{A}_{\epsilon} \left( \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right) + \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \bar{\mathbf{U}}_{t_{k}} \right) \right) \right\} \right\|^{2} \right]^{1/2} \\
\quad \leq C\, \mathbb{E}\left[ \left\| \int_{t_{k}}^{t_{k+1}} \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right) \mathrm{d}t \right\|^{2} \right]^{1/2} \mathbb{E}\left[ \left\| \sqrt{\mathfrak{M}} \left( \tilde{b}_{t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) - \tilde{b}_{t_{k}}\left( \bar{\mathbf{U}}_{t_{k}} \right) \right) \right\|^{2} \right]^{1/2},
\end{array}
$$

with $\tilde{b}_t$ the backward drift in (11), defined by $\tilde{b}_t : u \mapsto \tilde{A}_{\epsilon} u + \Sigma_{\epsilon}^{2} \tilde{\mathbf{s}}_{T-t}(u)$. On the one hand, the Cauchy-Schwarz inequality implies

$$
\mathbb{E}\left[ \left\| \int_{t_{k}}^{t_{k+1}} \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right) \mathrm{d}t \right\|^{2} \right]^{1/2} \leq \sqrt{h} \left( \int_{t_{k}}^{t_{k+1}} \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right\|^{2} \right] \mathrm{d}t \right)^{1/2}.
$$

Using the time-reversal property, Lemma D.3, together with the Cauchy-Schwarz inequality and Itô's isometry,

$$
\begin{array}{l}
\mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right\|^{2} \right] \leq \mathbb{E}\left[ \left\| \int_{T-t}^{T-t_{k}} A \overrightarrow{\mathbf{U}}_{s}\, \mathrm{d}s + \int_{T-t}^{T-t_{k}} \Sigma_{\epsilon}\, \mathrm{d}B_{s} \right\|^{2} \right] \\
\quad \leq C \left( h \|A\|^{2} \int_{T-t}^{T-t_{k}} \mathbb{E}\left[ \left\| \overrightarrow{\mathbf{U}}_{s} \right\|^{2} \right] \mathrm{d}s + h d \|\Sigma_{\epsilon}\|^{2} \right) \\
\quad \leq C \left( h^{2} \|A\|^{2} B^{2} + h d \|\Sigma_{\epsilon}\|^{2} \right),
\end{array} \tag{39}
$$

where $B$ is defined in (56). It follows that

$$
\mathbb{E}\left[ \left\| \int_{t_{k}}^{t_{k+1}} \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right) \mathrm{d}t \right\|^{2} \right]^{1/2} \leq h \sqrt{h}\, C \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right)^{1/2}.
$$

On the other hand,

$$
\mathbb{E}\left[ \left\| \sqrt{\mathfrak{M}} \left( \tilde{b}_{t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) - \tilde{b}_{t_{k}}\left( \bar{\mathbf{U}}_{t_{k}} \right) \right) \right\|^{2} \right]^{1/2} \leq C \left( \| \tilde{A}_{\epsilon} \| + \tilde{L}_{T-t_{k}} \right) \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} \right]^{1/2}.
$$

Therefore,

$$
\begin{array}{l}
\mathbb{E}\left[ A_{2,k} \right] \leq C h \sqrt{h} \left( \| \tilde{A}_{\epsilon} \| + \tilde{L}_{T-t_{k}} \right) \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right)^{1/2} \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} \right]^{1/2} \\
\quad = C h \sqrt{h} \| \tilde{A}_{\epsilon} \| \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right)^{1/2} \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} \right]^{1/2} \\
\quad + C h \sqrt{h} \tilde{L}_{T-t_{k}} \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right)^{1/2} \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} \right]^{1/2}.
\end{array}
$$

Moreover, from Young's inequality, we get that, for all $x, y \geq 0$ and $\alpha > 0$,

$$
x y \leq \frac{\alpha}{2} x^{2} + \frac{1}{2\alpha} y^{2}. \tag{40}
$$

It follows that

$$
\begin{array}{l}
\mathbb{E}\left[ A_{2,k} \right] \leq \frac{a - \eta}{6} h\, \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} \right] + h^{2} C \frac{\| \tilde{A}_{\epsilon} \|^{2} \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right)}{a - \eta} \\
\quad + C h \tilde{L}_{T-t_{k}}\, \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} \right] + C h^{2} \tilde{L}_{T-t_{k}} \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right).
\end{array}
$$

Bound of $\mathbb{E}[A_{3,k}]$.
Using the Cauchy-Schwarz inequality,

$$
\mathbb{E}\left[ A_{3,k} \right] \leq \mathbb{E}\left[ \left\| \int_{t_{k}}^{t_{k+1}} \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right) \mathrm{d}t \right\|^{2} \right]^{1/2} \mathbb{E}\left[ \left\| \tilde{A}_{\epsilon}^{\top} \mathfrak{M} \left( \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right) \right\|^{2} \right]^{1/2}.
$$

On the one hand,

$$
\mathbb{E}\left[ \left\| \tilde{A}_{\epsilon}^{\top} \mathfrak{M} \left( \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right) \right\|^{2} \right]^{1/2} \leq C \| \tilde{A}_{\epsilon} \|\, \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|^{2} \right]^{1/2},
$$

and, on the other hand, using (39) yields

$$
\mathbb{E}\left[ \left\| \int_{t_{k}}^{t_{k+1}} \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right) \mathrm{d}t \right\|^{2} \right]^{1/2} \leq C h \sqrt{h} \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right)^{1/2}.
$$

Combining both and using (40) yields

$$
\begin{array}{l}
\mathbb{E}\left[ A_{3,k} \right] \leq C h \sqrt{h} \| \tilde{A}_{\epsilon} \| \sqrt{ h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d }\; \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} \right]^{1/2} \\
\quad \leq \frac{a - \eta}{6} h\, \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} \right] + h^{2} C \frac{\| \tilde{A}_{\epsilon} \|^{2} \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right)}{a - \eta}.
\end{array}
$$

Bound of $\mathbb{E}[A_{4,k}]$.
Using the Cauchy-Schwarz inequality,

$$
\mathbb{E}\left[ A_{4,k} \right] \leq C\, \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} \right]^{1/2} \mathbb{E}\left[ \left\| \int_{t_{k}}^{t_{k+1}} \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t}\left( \overleftarrow{\mathbf{U}}_{t} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) \right) \mathrm{d}t \right\|^{2} \right]^{1/2}.
$$

Then, using the Cauchy-Schwarz inequality again,

$$
\mathbb{E}\left[ \left\| \int_{t_{k}}^{t_{k+1}} \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t}\left( \overleftarrow{\mathbf{U}}_{t} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) \right) \mathrm{d}t \right\|^{2} \right]^{1/2} \leq \sqrt{h} \left( \int_{t_{k}}^{t_{k+1}} \mathbb{E}\left[ \left\| \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t}\left( \overleftarrow{\mathbf{U}}_{t} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) \right) \right\|^{2} \right] \mathrm{d}t \right)^{1/2}.
$$

Next, by Lemma D.8,

$$
\begin{array}{l}
\mathbb{E}\left[ \left\| \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t}\left( \overleftarrow{\mathbf{U}}_{t} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) \right) \right\|^{2} \right] \leq \|\Sigma_{\epsilon}\|^{2}\, \mathbb{E}\left[ \left\| \nabla \log \tilde{p}_{T-t}\left( \overrightarrow{\mathbf{U}}_{T-t} \right) - \nabla \log \tilde{p}_{T-t_{k}}\left( \overrightarrow{\mathbf{U}}_{T-t_{k}} \right) \right\|^{2} \right] \\
\quad \leq C \|\Sigma_{\epsilon}\|^{2} \left( g(t_{k+1}) - g(t_{k}) \right),
\end{array} \tag{41}
$$

with the function $g$ defined in (59). This yields

$$
\begin{array}{l}
\mathbb{E}\left[ A_{4,k} \right] \leq C h \|\Sigma_{\epsilon}\| \sqrt{ g(t_{k+1}) - g(t_{k}) }\; \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} \right]^{1/2} \\
\quad \leq h \frac{a - \eta}{6}\, \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} \right] + h \frac{C \|\Sigma_{\epsilon}\|^{2}}{a - \eta} \left( g(t_{k+1}) - g(t_{k}) \right),
\end{array}
$$

where we have used Young's inequality in the last step.

Bound of $\mathbb{E}[A_{5,k}]$.
Using the Cauchy-Schwarz inequality,

$$
\begin{array}{l}
\mathbb{E}\left[ A_{5,k} \right] = \int_{t_{k}}^{t_{k+1}} \mathbb{E}\left[ \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right)^{\top} \mathfrak{M} \tilde{A}_{\epsilon} \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right) \right] \mathrm{d}t \\
\quad \leq C \| \tilde{A}_{\epsilon} \|\, \mathbb{E}\left[ \int_{t_{k}}^{t_{k+1}} \left\| \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right\|^{2} \mathrm{d}t \right].
\end{array}
$$

Then, using Lemma D.3 as in (39),

$$
\mathbb{E}\left[ A_{5,k} \right] \leq C h^{2} \| \tilde{A}_{\epsilon} \| \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right),
$$

with $B$ defined in (56).

Bound of $\mathbb{E}[A_{6,k}]$. Using the Cauchy-Schwarz inequality,

$$
\begin{array}{l}
\mathbb{E}\left[ A_{6,k} \right] = \int_{t_{k}}^{t_{k+1}} \mathbb{E}\left[ \left( \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right)^{\top} \mathfrak{M} \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t}\left( \overleftarrow{\mathbf{U}}_{t} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) \right) \right] \mathrm{d}t \\
\quad \leq C \int_{t_{k}}^{t_{k+1}} \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}} \right\|^{2} \right]^{1/2} \mathbb{E}\left[ \left\| \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t}\left( \overleftarrow{\mathbf{U}}_{t} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \overleftarrow{\mathbf{U}}_{t_{k}} \right) \right) \right\|^{2} \right]^{1/2} \mathrm{d}t.
\end{array}
$$

Controlling the first term as in (39) and the second term as in (41), using Lemma D.8, together with Young's inequality, yields

$$
\mathbb{E}\left[ A_{6,k} \right] \leq C h^{2} (a - \eta) \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right) + C h \frac{\|\Sigma_{\epsilon}\|^{2} \left( g(t_{k+1}) - g(t_{k}) \right)}{a - \eta}.
$$

Final bound. Combining the upper bounds for $A_{1,k}$, $A_{2,k}$, $A_{3,k}$, $A_{4,k}$, $A_{5,k}$ and $A_{6,k}$, there exists a constant $C > 0$ such that

$$
\begin{array}{l}
\mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k+1}} - \bar{\mathbf{U}}_{t_{k+1}} \right\|_{\mathfrak{M}}^{2} \right] \leq \delta_{k}\, \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{\mathfrak{M}}^{2} \right] + C h \frac{\|\Sigma_{\epsilon}\|^{2}}{a - \eta} \left( g(t_{k+1}) - g(t_{k}) \right) \\
\quad + C h^{2} \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right) \left( (a - \eta) + \frac{a - \eta + 2}{a - \eta} \| \tilde{A}_{\epsilon} \| \vee \| \tilde{A}_{\epsilon} \|^{2} \right) \\
\quad + C h^{2} \tilde{L}_{T-t_{k}} \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right),
\end{array}
$$

with $\delta_{k} := 1 + h\left( C \tilde{L}_{T-t_{k}} - (a - \eta)/2 \right)$.
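The display above is a linear one-step recursion of the form $x_{k+1} \leq \delta_k x_k + c_k$, with $x_k := \mathbb{E}[\| \overleftarrow{\mathbf{U}}_{t_k} - \bar{\mathbf{U}}_{t_k} \|_{\mathfrak{M}}^2]$. As a sanity check of the unrolling used next, the following Python sketch verifies, on toy values of $h$, $a$, $\eta$, $C$ and a hypothetical Lipschitz sequence (none of these are the paper's constants), that iterating the recursion with equality reproduces the closed form $x_N = \big(\prod_{k=0}^{N-1} \delta_k\big) x_0 + \sum_{k=0}^{N-1} c_k \prod_{j=k+1}^{N-1} \delta_j$:

```python
import math

# Iterating x_{k+1} = delta_k * x_k + c_k step by step ...
def iterate(x0, deltas, cs):
    x = x0
    for d, c in zip(deltas, cs):
        x = d * x + c
    return x

# ... matches the unrolled form prod(deltas)*x0 + sum_k c_k * prod_{j>k} delta_j.
def unrolled(x0, deltas, cs):
    return math.prod(deltas) * x0 + sum(
        c * math.prod(deltas[k + 1:]) for k, c in enumerate(cs)
    )

# Toy ingredients: delta_k = 1 + h*(C*L_k - (a - eta)/2), with a hypothetical
# Lipschitz sequence L_k ~ min{K, (T - t_k)^{-1/2}} mimicking Proposition B.1.
h, a, eta, C, N = 0.01, 1.0, 0.1, 0.5, 200
L = [min(5.0, 1.0 / math.sqrt((k + 1) * h)) for k in range(N)]
deltas = [1 + h * (C * L[k] - (a - eta) / 2) for k in range(N)]
cs = [0.3 * h**2] * N

assert abs(iterate(1.0, deltas, cs) - unrolled(1.0, deltas, cs)) < 1e-9
```

The same identity, with "$\leq$" propagated in place of "$=$", is what produces the four-term unrolled bound used in the remainder of the proof.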
Therefore,

$$
\begin{array}{l}
\mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{N}} - \bar{\mathbf{U}}_{t_{N}} \right\|_{\mathfrak{M}}^{2} \right] \leq \left( \prod_{k=0}^{N-1} \delta_{k} \right) \mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{0} - \bar{\mathbf{U}}_{0} \right\|_{\mathfrak{M}}^{2} \right] \\
\quad + C h^{2} \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right) \left( (a - \eta) + \frac{a - \eta + 2}{a - \eta} \| \tilde{A}_{\epsilon} \| \vee \| \tilde{A}_{\epsilon} \|^{2} \right) \sum_{k=0}^{N-1} \prod_{j=k+1}^{N-1} \delta_{j} \\
\quad + C h \frac{\|\Sigma_{\epsilon}\|^{2}}{a - \eta} \sum_{k=0}^{N-1} \left( g(t_{k+1}) - g(t_{k}) \right) \prod_{j=k+1}^{N-1} \delta_{j} \\
\quad + C h^{2} \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} d \right) \sum_{k=0}^{N-1} \tilde{L}_{T-t_{k}} \prod_{j=k+1}^{N-1} \delta_{j}.
\end{array} \tag{42}
$$

First recall that the two processes share the same initialization, i.e., $\overleftarrow{\mathbf{U}}_0 = \bar{\mathbf{U}}_0$, so the first term in (42) vanishes. Note that, since $\exp(x) \geq 1 + x$ for all $x \in \mathbb{R}$,

$$
\begin{array}{l}
\prod_{j=k+1}^{N-1} \delta_{j} \leq \exp\left( \sum_{j=k+1}^{N-1} h \left( C \tilde{L}_{T-t_{j}} - \frac{a - \eta}{2} \right) \right) \\
\quad \leq \exp\left( -\frac{a - \eta}{2} h (N - k - 1) + C \sum_{j=k+1}^{N-1} h \tilde{L}_{T-t_{j}} \right) \\
\quad \leq \exp\left( -\frac{a - \eta}{2} h (N - k - 1) + C \int_{0}^{\infty} \tilde{L}_{s}\, \mathrm{d}s \right) \\
\quad \leq \exp\left( -\frac{a - \eta}{2} h (N - k - 1) + C a^{-1} \right),
\end{array}
$$

where we use the bound (26) from Proposition B.1.
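Two elementary facts drive this product bound: $1 + x \leq \mathrm{e}^x$ applied term by term, and the domination of the Riemann sum $h \sum_j \tilde{L}_{T-t_j}$ by $\int_0^\infty \tilde{L}_s\, \mathrm{d}s$ for a decreasing integrand. The following Python sketch checks both numerically for a hypothetical stand-in for $\tilde{L}$ with the shape from Proposition B.1 (the constants $a$, $\eta$, $C$, $T_{\mathrm{change}}$ below are illustrative, not the paper's):

```python
import math

a, eta, C, h, T_c = 1.0, 0.1, 0.4, 0.005, 1.0
M = max(a, 1 / a)

def L_tilde(s):
    # hypothetical stand-in: C * s^{-1/2} + max{a, 1/a} up to T_c,
    # then C * exp(-2 a s); decreasing in s with these constants
    return C / math.sqrt(s) + M if s <= T_c else C * math.exp(-2 * a * s)

# closed-form integral of L_tilde over (0, infinity)
I = 2 * C * math.sqrt(T_c) + M * T_c + C * math.exp(-2 * a * T_c) / (2 * a)

# right-endpoint Riemann sum of a decreasing function sits below its integral
riemann = h * sum(L_tilde(m * h) for m in range(1, 100_000))
assert riemann <= I

# product of delta_j = 1 + h * (C * L_tilde(T - t_j) - (a - eta)/2), with k = 0
N, k = 2000, 0
prod = math.prod(
    1 + h * (C * L_tilde((N - j) * h) - (a - eta) / 2) for j in range(k + 1, N)
)
bound = math.exp(-(a - eta) / 2 * h * (N - k - 1) + C * I)
assert prod <= bound
```

(The symbol $C$ is reused here both inside `L_tilde` and as the outer constant; in the proof these are distinct constants, which does not affect the check.)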
Combining this bound with the fact that $h \leq (1 - \mathrm{e}^{-h}) \mathrm{e}^{h}$, we obtain

$$
h \exp\left( -\frac{a - \eta}{2} (N - k) h \right) \leq \frac{2 \mathrm{e}^{h}}{a - \eta} \left( \exp\left( -\frac{a - \eta}{2} (N - k) h \right) - \exp\left( -\frac{a - \eta}{2} (N - k + 1) h \right) \right),
$$

which then implies that

$$
\begin{array}{l}
h \sum_{k=0}^{N-1} \prod_{j=k+1}^{N-1} \delta_{j} \leq h \sum_{k=0}^{N-1} \mathrm{e}^{-\frac{a - \eta}{2} (N - k - 2) h} \times \mathrm{e}^{C a^{-1}} \\
\quad \leq \frac{2 \mathrm{e}^{h}}{a - \eta} \sum_{k=0}^{N-1} \left( \mathrm{e}^{-\frac{a - \eta}{2} (N - k - 2) h} - \mathrm{e}^{-\frac{a - \eta}{2} (N - k - 1) h} \right) \times \mathrm{e}^{C a^{-1}} \leq \frac{C \mathrm{e}^{C a^{-1}}}{a - \eta},
\end{array}
$$

increasing the value of the constant $C$ if necessary. For the term involving $\tilde{L}_{T-t_k}$, note that

$$
\begin{array}{l}
h \tilde{L}_{T-t_{k}} \exp\left( -\frac{a - \eta}{2} (N - k) h \right) \leq h \frac{C}{\sqrt{(N - k) h}} \exp\left( -\frac{a - \eta}{2} (N - k) h \right) \\
\quad \leq \frac{2 \mathrm{e}^{h}}{a - \eta} \left( \Gamma\left( \frac{1}{2}, \frac{a - \eta}{2} (N - k) h \right) - \Gamma\left( \frac{1}{2}, \frac{a - \eta}{2} (N - k + 1) h \right) \right),
\end{array}
$$

where $\Gamma(z, x) := \int_{x}^{\infty} s^{z-1} \mathrm{e}^{-s}\, \mathrm{d}s$ denotes the upper incomplete Gamma function. Consequently,

$$
\begin{array}{l}
h \sum_{k=0}^{N-1} \tilde{L}_{T-t_{k}} \prod_{j=k+1}^{N-1} \delta_{j} \leq \frac{2 \mathrm{e}^{h}}{a - \eta} \sum_{k=0}^{N-1} \left( \Gamma\left( \frac{1}{2}, \frac{a - \eta}{2} (N - k - 2) h \right) - \Gamma\left( \frac{1}{2}, \frac{a - \eta}{2} (N - k - 1) h \right) \right) \times \mathrm{e}^{C a^{-1}} \\
\quad \leq \frac{C \mathrm{e}^{C a^{-1}}}{a - \eta}.
\end{array}
$$

Moreover, we have that

$$
\sum_{k=0}^{N-1} \left( g(t_{k+1}) - g(t_{k}) \right) \prod_{j=k+1}^{N-1} \delta_{j} \leq \mathrm{e}^{C a^{-1}} \sum_{k=0}^{N-1} \left( g(t_{k+1}) - g(t_{k}) \right) \leq \mathrm{e}^{C a^{-1}} g(t_{N}) \leq C \mathrm{e}^{C a^{-1}}\, \mathbb{E}\left[ \left\| \tilde{\mathbf{s}}_{\Delta}\left( \overrightarrow{\mathbf{U}}_{\Delta} \right) \right\|^{2} \right].
$$

Note that $\mathbb{E}[\| \tilde{\mathbf{s}}_{\Delta}(\overrightarrow{\mathbf{U}}_{\Delta}) \|^{2}]$ corresponds to the relative Fisher information between $p_{\Delta}$ and $\pi_{\infty}$. Following the argument of Conforti et al. (2025, Lemma 3.9) and using Assumption H1, we can conclude, for $\Delta \to 0$, that $\mathcal{I}(\pi_{\mathrm{data}} \otimes \pi_{v} \mid \pi_{\infty}) = \mathbb{E}[\| \tilde{\mathbf{s}}_{0}(\overrightarrow{\mathbf{U}}_{0}) \|^{2}] < \infty$. Then, applying (42) directly yields

$$
\mathbb{E}\left[ \left\| \overleftarrow{\mathbf{U}}_{t_{N}} - \bar{\mathbf{U}}_{t_{N}} \right\|_{\mathfrak{M}}^{2} \right] \leq h \times C \left( h \|A\|^{2} B^{2} + \|\Sigma_{\epsilon}\|^{2} \left( d + \mathcal{I}(\pi_{\mathrm{data}} \otimes \pi_{v} \mid \pi_{\infty}) \right) \right) \frac{\mathrm{e}^{C a^{-1}}}{a - \eta},
$$

which concludes the proof.

Lemma B.3 (Approximation error). Assume that Assumptions H2 and H3 hold. Then, for any $\eta > 0$, there exists a constant $C > 0$ such that

$$
\mathcal{W}_{2}\left( \mathcal{L}\left( \bar{\mathbf{U}}_{T}^{\infty} \right), \mathcal{L}\left( \bar{\mathbf{U}}_{T}^{\theta} \right) \right) \leq C \frac{\|\Sigma_{\epsilon}\|^{2}}{a - \eta} M. \tag{43}
$$

Proof.
As in the proof of Lemma B.2, consider the synchronous coupling of the two processes $\bar{\mathbf{U}}^{\infty}$ and $\bar{\mathbf{U}}^{\theta}$, with the same initial condition $\bar{\mathbf{U}}_0^{\infty} = \bar{\mathbf{U}}_0^{\theta}$. We have

$$
\mathcal{W}_{2}\left( \mathcal{L}\left( \bar{\mathbf{U}}_{T}^{\infty} \right), \mathcal{L}\left( \bar{\mathbf{U}}_{T}^{\theta} \right) \right) \leq \left\| \bar{\mathbf{U}}_{T}^{\infty} - \bar{\mathbf{U}}_{T}^{\theta} \right\|_{L_{2}}.
$$

Fix $\Delta > 0$ such that $t_N = T - \Delta$ and note that, for all $0 \leq k \leq N - 1$, from (12) and (13), we get

$$
\bar{\mathbf{U}}_{t_{k+1}}^{\infty} - \bar{\mathbf{U}}_{t_{k+1}}^{\theta} = \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} + \int_{t_{k}}^{t_{k+1}} \left\{ \tilde{A}_{\epsilon} \left( \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} \right) + \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t_{k}}\left( \bar{\mathbf{U}}_{t_{k}}^{\infty} \right) - \tilde{s}_{\theta}\left( T - t_{k}, \bar{\mathbf{U}}_{t_{k}}^{\theta} \right) \right) \right\} \mathrm{d}t.
$$

Taking $\mathfrak{M}$ as in the proof of Lemma B.2, we have

$$
\left\| \bar{\mathbf{U}}_{t_{k+1}}^{\infty} - \bar{\mathbf{U}}_{t_{k+1}}^{\theta} \right\|_{\mathfrak{M}}^{2} = \left\| \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} \right\|_{\mathfrak{M}}^{2} + 2 B_{1,k} + 2 B_{2,k},
$$

with

$$
\begin{array}{l}
B_{1,k} = h \left( \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} \right)^{\top} \mathfrak{M} \tilde{A}_{\epsilon} \left( \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} \right), \\
B_{2,k} = h \left( \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} \right)^{\top} \mathfrak{M} \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t_{k}}\left( \bar{\mathbf{U}}_{t_{k}}^{\infty} \right) - \tilde{s}_{\theta}\left( T - t_{k}, \bar{\mathbf{U}}_{t_{k}}^{\theta} \right) \right).
\end{array}
$$

Bound of $\mathbb{E}[B_{1,k}]$. From (38), we note that

$$
\mathbb{E}\left[ B_{1,k} \right] \leq -h (a - \eta)\, \mathbb{E}\left[ \left\| \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} \right\|_{\mathfrak{M}}^{2} \right].
$$

Bound of $\mathbb{E}[B_{2,k}]$.
We decompose the second term into a score component and an approximation component:

$$
\begin{array}{l}
\mathbb{E}\left[ B_{2,k} \right] = h\, \mathbb{E}\left[ \left( \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} \right)^{\top} \mathfrak{M} \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t_{k}}\left( \bar{\mathbf{U}}_{t_{k}}^{\infty} \right) - \tilde{\mathbf{s}}_{T-t_{k}}\left( \bar{\mathbf{U}}_{t_{k}}^{\theta} \right) \right) \right] \\
\quad + h\, \mathbb{E}\left[ \left( \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} \right)^{\top} \mathfrak{M} \Sigma_{\epsilon}^{2} \left( \tilde{\mathbf{s}}_{T-t_{k}}\left( \bar{\mathbf{U}}_{t_{k}}^{\theta} \right) - \tilde{s}_{\theta}\left( T - t_{k}, \bar{\mathbf{U}}_{t_{k}}^{\theta} \right) \right) \right] \\
\quad \leq h C \tilde{L}_{T-t_{k}}\, \mathbb{E}\left[ \left\| \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} \right\|_{\mathfrak{M}}^{2} \right] + h C \|\Sigma_{\epsilon}\|^{2} M\, \mathbb{E}\left[ \left\| \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} \right\|_{\mathfrak{M}} \right] \\
\quad \leq h C \tilde{L}_{T-t_{k}}\, \mathbb{E}\left[ \left\| \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} \right\|_{\mathfrak{M}}^{2} \right] + h \frac{a - \eta}{2}\, \mathbb{E}\left[ \left\| \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} \right\|_{\mathfrak{M}}^{2} \right] + h C \|\Sigma_{\epsilon}\|^{4} M^{2},
\end{array}
$$

where we have used Young's inequality in the last step, for $C > 0$ a universal constant (which may change from line to line) depending only on the eigenvalues of the matrix $\mathfrak{M}$ or constant factors.

Final bound.
Combining the bounds on $B_{1,k}$ and $B_{2,k}$, there exists a constant $C > 0$ such that

$$
\mathbb{E}\left[\left\| \bar{\mathbf{U}}_{t_{k+1}}^{\infty} - \bar{\mathbf{U}}_{t_{k+1}}^{\theta} \right\|_{\mathfrak{M}}^{2}\right] \leq \delta_{k} \mathbb{E}\left[\left\| \bar{\mathbf{U}}_{t_{k}}^{\infty} - \bar{\mathbf{U}}_{t_{k}}^{\theta} \right\|_{\mathfrak{M}}^{2}\right] + h \frac{C}{a - \eta} \|\Sigma_{\epsilon}\|^{4} M^{2},
$$

with $\delta_k \coloneqq 1 + h\left(C\tilde{L}_{T - t_k} - (a - \eta)/2\right)$. Therefore, we have

$$
\begin{array}{l} \mathbb{E}\left[\left\| \bar{\mathbf{U}}_{t_{N}}^{\infty} - \bar{\mathbf{U}}_{t_{N}}^{\theta} \right\|_{\mathfrak{M}}^{2}\right] \\ \leq \prod_{j = 0}^{N - 1} \delta_{j} \, \mathbb{E}\left[\left\| \bar{\mathbf{U}}_{0}^{\infty} - \bar{\mathbf{U}}_{0}^{\theta} \right\|_{\mathfrak{M}}^{2}\right] + h \frac{C}{a - \eta} \|\Sigma_{\epsilon}\|^{4} M^{2} \sum_{k = 0}^{N - 1} \prod_{j = k + 1}^{N - 1} \delta_{j} \\ \leq h \frac{C}{a - \eta} \left\|\Sigma_{\epsilon}\right\|^{4} M^{2} \sum_{k = 0}^{N - 1} \prod_{j = k + 1}^{N - 1} \delta_{j}, \\ \end{array}
$$

where we used that $\bar{\mathbf{U}}_0^\infty = \bar{\mathbf{U}}_0^\theta$. Following the same argument as in Lemma B.2 (discretization error term), we obtain

$$
\mathbb{E}\left[\left\| \bar{\mathbf{U}}_{t_{N}}^{\infty} - \bar{\mathbf{U}}_{t_{N}}^{\theta} \right\|_{\mathfrak{M}}^{2}\right] \leq C \frac{\left\|\Sigma_{\epsilon}\right\|^{4} M^{2}}{(a - \eta)^{2}}.
$$

We conclude the proof by taking the limit as $\Delta \to 0$ together with Fatou's lemma. $\square$

Lemma B.4 (Mixing time). Assume that $H2$ holds.
Then, for all $\eta > 0$, there exists a constant $C > 0$ such that

$$
\mathcal{W}_{2}\left(\mathcal{L}\left(\bar{\mathbf{U}}_{T}\right), \mathcal{L}\left(\bar{\mathbf{U}}_{T}^{\infty}\right)\right) \leq C \mathrm{e}^{C a^{-1}} \times T \mathrm{e}^{-\frac{3}{2}(a - \eta) T} \mathcal{W}_{2}\left(\pi_{\mathrm{data}} \otimes \pi_{v}, \pi_{\infty}\right). \tag{44}
$$

Proof. Consider a synchronous coupling of the continuous-time interpolations $(\bar{\mathbf{U}}_t)_{t\in [0,T]}$ and $(\bar{\mathbf{U}}_t^\infty)_{t\in [0,T]}$, defined in (12), initialized so that

$$
\mathcal{W}_{2}\left(\pi_{\infty}, \mathcal{L}\left(\overrightarrow{\mathbf{U}}_{T}\right)\right) = \left\| \bar{\mathbf{U}}_{0} - \bar{\mathbf{U}}_{0}^{\infty} \right\|_{L_{2}}.
$$

By definition of the $\mathcal{W}_2$ distance,

$$
\mathcal{W}_{2}\left(\mathcal{L}\left(\bar{\mathbf{U}}_{T}\right), \mathcal{L}\left(\bar{\mathbf{U}}_{T}^{\infty}\right)\right) \leq \left\| \bar{\mathbf{U}}_{T} - \bar{\mathbf{U}}_{T}^{\infty} \right\|_{L_{2}}.
$$

Analogously to the proofs of Lemma B.2 and Lemma B.3, fix $\Delta \geq 0$ such that $t_N = T - \Delta$, and note that, for all $0 \leq k \leq N - 1$,

$$
\left\| \bar{\mathbf{U}}_{t_{k+1}} - \bar{\mathbf{U}}_{t_{k+1}}^{\infty} \right\|_{\mathfrak{M}}^{2} = \left\| \bar{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}^{\infty} \right\|_{\mathfrak{M}}^{2} + C_{k},
$$

with

$$
C_{k} = 2h \left(\bar{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}^{\infty}\right)^{\top} \mathfrak{M} \left\{\tilde{A}_{\epsilon} \left(\bar{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}^{\infty}\right) + \Sigma_{\epsilon}^{2} \left(\tilde{\mathbf{s}}_{T - t_{k}} \left(\bar{\mathbf{U}}_{t_{k}}\right) - \tilde{\mathbf{s}}_{T - t_{k}} \left(\bar{\mathbf{U}}_{t_{k}}^{\infty}\right)\right) \right\}.
$$

Similarly to Lemma B.2, we use that, for any fixed $\eta > 0$, we have

$$
\mathfrak{M} \tilde{A}_{\epsilon} \preccurlyeq -(a - \eta) \mathfrak{M}.
$$

Therefore,

$$
\left(\bar{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}^{\infty}\right)^{\top} \mathfrak{M} \tilde{A}_{\epsilon} \left(\bar{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}^{\infty}\right) \leq -(a - \eta) \left\| \bar{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}^{\infty} \right\|_{\mathfrak{M}}^{2},
$$

and, using Proposition B.1, there exists a universal constant $C > 0$ (depending only on the eigenvalues of the matrix $\mathfrak{M}$ or constant factors) such that

$$
\left(\bar{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}^{\infty}\right)^{\top} \mathfrak{M} \Sigma_{\epsilon}^{2} \left(\tilde{\mathbf{s}}_{T - t_{k}} \left(\bar{\mathbf{U}}_{t_{k}}\right) - \tilde{\mathbf{s}}_{T - t_{k}} \left(\bar{\mathbf{U}}_{t_{k}}^{\infty}\right)\right) \leq C \tilde{L}_{T - t_{k}} \left\| \bar{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}^{\infty} \right\|_{\mathfrak{M}}^{2}.
$$

It follows that

$$
\mathbb{E}\left[C_{k}\right] \leq h \left(C \tilde{L}_{T - t_{k}} - (a - \eta)\right) \mathbb{E}\left[\left\| \bar{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}^{\infty} \right\|_{\mathfrak{M}}^{2}\right].
$$

As a consequence,

$$
\mathbb{E}\left[\left\| \bar{\mathbf{U}}_{t_{N}} - \bar{\mathbf{U}}_{t_{N}}^{\infty} \right\|_{\mathfrak{M}}^{2}\right] \leq \mathbb{E}\left[\left\| \bar{\mathbf{U}}_{0} - \bar{\mathbf{U}}_{0}^{\infty} \right\|_{\mathfrak{M}}^{2}\right] \prod_{\ell = 0}^{N - 1} \delta_{\ell}^{\prime},
$$

with $\delta_{\ell}^{\prime} = 1 + h\left(C\tilde{L}_{T - t_{\ell}} - (a - \eta)\right)$.
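The product of the factors $\delta_{\ell}^{\prime}$ is controlled via the elementary bound $\exp(x) \geq 1 + x$. As a quick numerical sanity check of that step (with arbitrary illustrative values of $h$ and of the rates, not the constants of the lemma), one can verify that the product of factors $1 + h r_{\ell}$ stays below the corresponding exponential:

```python
import math
import random

# Sanity check of prod_l (1 + h*r_l) <= exp(sum_l h*r_l), which follows
# from exp(x) >= 1 + x. The rates r_l play the role of
# C * L_{T - t_l} - (a - eta); all values below are arbitrary illustrations.
random.seed(0)
h, N = 0.01, 200
rates = [random.uniform(-50.0, 5.0) for _ in range(N)]

product = 1.0
for r in rates:
    product *= 1.0 + h * r  # each factor delta'_l, positive for small h

exponential_bound = math.exp(sum(h * r for r in rates))

assert all(1.0 + h * r > 0.0 for r in rates)
assert product <= exponential_bound
```

The check passes for any choice of rates as long as every factor stays positive, which is exactly the small-step-size regime used in the proof.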
Since $\exp(x) \geq 1 + x$ for $x \in \mathbb{R}$, we have that

$$
\begin{array}{l} \prod_{\ell = 0}^{N - 1} \delta_{\ell}^{\prime} \leq \mathrm{e}^{\sum_{\ell = 0}^{N - 1} h \left(C \tilde{L}_{T - t_{\ell}} - (a - \eta)\right)} \\ \leq \mathrm{e}^{-(a - \eta) T + C \sum_{\ell = 0}^{N - 1} h \tilde{L}_{T - t_{\ell}}} \\ \leq \mathrm{e}^{-(a - \eta) T + C \int_{0}^{\infty} \tilde{L}_{s} \mathrm{d}s} \\ \leq \mathrm{e}^{-(a - \eta) T + C a^{-1}}, \\ \end{array}
$$

thus,

$$
\mathbb{E}\left[\left\| \bar{\mathbf{U}}_{t_{N}} - \bar{\mathbf{U}}_{t_{N}}^{\infty} \right\|_{\mathfrak{M}}^{2}\right] \leq \mathrm{e}^{C a^{-1}} \mathrm{e}^{-(a - \eta) T} \mathbb{E}\left[\left\| \bar{\mathbf{U}}_{0} - \bar{\mathbf{U}}_{0}^{\infty} \right\|_{\mathfrak{M}}^{2}\right],
$$

which implies, taking the limit as $\Delta \to 0$ together with Fatou's lemma, that

$$
\mathcal{W}_{2}^{2}\left(\mathcal{L}\left(\bar{\mathbf{U}}_{T}\right), \mathcal{L}\left(\bar{\mathbf{U}}_{T}^{\infty}\right)\right) \leq C \mathrm{e}^{C a^{-1}} \mathrm{e}^{-(a - \eta) T} \left\| \bar{\mathbf{U}}_{0} - \bar{\mathbf{U}}_{0}^{\infty} \right\|_{L_{2}}^{2}.
$$

Moreover, similarly to the backward process, the forward process also satisfies the following contraction property (Lemma A.4):

$$
\left\| \bar{\mathbf{U}}_{0} - \bar{\mathbf{U}}_{0}^{\infty} \right\|_{L_{2}} = \mathcal{W}_{2}\left(\pi_{\infty}, \mathcal{L}\left(\overrightarrow{\mathbf{U}}_{T}\right)\right) \leq C T \mathrm{e}^{-(a - \eta) T} \mathcal{W}_{2}\left(\pi_{\mathrm{data}} \otimes \pi_{v}, \pi_{\infty}\right),
$$

yielding (44). $\square$

# C Proof of Theorem 3.2

In this section, we prove Theorem 3.2.
To establish this result, we work with the (unmodified) score function rather than the modified one used previously. Similarly to the previous section, we introduce the continuous-time interpolation $(\bar{\mathbf{U}}_t)_{t\in [0,T]}$ of the Euler scheme for the time-reversed process $(\overleftarrow{\mathbf{U}}_t)_{t\in [0,T]}$, defined as the Itô process, for $t \in [t_k, t_{k+1}]$,

$$
\bar{\mathbf{U}}_{t} = \bar{\mathbf{U}}_{t_{k}} + \left(-A \bar{\mathbf{U}}_{t_{k}} + \Sigma_{\varepsilon}^{2} \mathbf{s}_{T - t_{k}} \left(\bar{\mathbf{U}}_{t_{k}}\right)\right)(t - t_{k}) + \Sigma_{\varepsilon} \left(B_{t} - B_{t_{k}}\right), \tag{45}
$$

when initialized at $p_T$ (i.e., $\bar{\mathbf{U}}_0 \sim p_T$). When initialized at $\pi_{\infty}$, we write $(\bar{\mathbf{U}}_t^\infty)_{t \in [0,T]}$ for this Itô process. We also introduce the continuous-time Euler scheme $(\bar{\mathbf{U}}_t^\theta)_{t \in [0,T]}$, in which the true, unknown score function is replaced by a neural network approximation $s_\theta$, defined for $t \in [t_k, t_{k+1}]$ as

$$
\bar{\mathbf{U}}_{t}^{\theta} = \bar{\mathbf{U}}_{t_{k}}^{\theta} + \left(-A \bar{\mathbf{U}}_{t_{k}}^{\theta} + \Sigma_{\varepsilon}^{2} s_{\theta} \left(T - t_{k}, \bar{\mathbf{U}}_{t_{k}}^{\theta}\right)\right)(t - t_{k}) + \Sigma_{\varepsilon} \left(B_{t} - B_{t_{k}}\right), \tag{46}
$$

where $\bar{\mathbf{U}}_0^\theta \sim \pi_\infty$.

We first establish the propagation of regularity properties: strong log-concavity propagation (Proposition C.1) and Lipschitz regularity propagation (Proposition C.2), followed by the proof of Theorem 3.2. To this end, we decompose the generation error into the sum of the discretization error (Lemma C.3), the approximation error (Lemma C.4), and the mixing-time error (Lemma C.5), as in Theorem 3.1.

# C.1 Propagation of the regularity assumptions

Proposition C.1.
Assume that $H2'$ holds. Then, for all $t \in [0, T]$ and all $u \in \mathbb{R}^{2d}$,

$$
\nabla^{2} \log p_{t}(u) \preccurlyeq -\alpha_{t} \mathbf{I}_{2d},
$$

where

$$
\alpha_{t} = \left(\frac{1}{\left(\alpha_{0} \wedge v^{-2}\right) \sigma_{\min}^{2}\left(\mathrm{e}^{-tA}\right)} + \lambda_{\max}\left(\Sigma_{0,t}\right)\right)^{-1}. \tag{47}
$$

Proof. Similarly to the proof of Proposition B.1, recall the following equality in law, given by the modified kinetic OU process (9):

$$
\overrightarrow{\mathbf{U}}_{t} \stackrel{\mathcal{L}}{=} \mathrm{e}^{tA} \overrightarrow{\mathbf{U}}_{0} + \sqrt{\Sigma_{0,t}}\, G,
$$

with $\overrightarrow{\mathbf{U}}_0 \sim \pi_{\mathrm{data}} \otimes \pi_v$ and $G \sim \mathcal{N}(0, \mathbf{I}_{2d})$, where $G$ and $\overrightarrow{\mathbf{U}}_0$ are independent, and $\Sigma_{0,t}$ is defined in (20). Writing $q_{t|0}$ for the conditional density of $\overrightarrow{\mathbf{U}}_t$ given $\overrightarrow{\mathbf{U}}_0$, we have

$$
p_{t}(u_{t}) = \det\left(\mathrm{e}^{-tA}\right) \int_{\mathbb{R}^{2d}} p_{0}\left(\mathrm{e}^{-tA} z\right) \det\left(2\pi \Sigma_{0,t}\right)^{-1/2} \exp\left(-\frac{1}{2}\left(u_{t} - z\right)^{\top} \Sigma_{0,t}^{-1}\left(u_{t} - z\right)\right) \mathrm{d}z.
$$

Since $\pi_{\mathrm{data}}$ is $\alpha_0$-strongly log-concave and $\pi_v$ is a centered Gaussian with covariance $v^2 \mathbf{I}_d$, their product (i.e., the probability density function of $\overrightarrow{\mathbf{U}}_0$) satisfies

$$
\nabla^{2} \log p_{0}(z) \preccurlyeq -(\alpha_{0} \wedge v^{-2}) \mathbf{I}_{2d}.
$$

Consequently, for any $z \in \mathbb{R}^{2d}$,

$$
\nabla^{2} \log p_{0}(\mathrm{e}^{-tA} z) \preccurlyeq -(\alpha_{0} \wedge v^{-2}) (\mathrm{e}^{-tA})^{\top} \mathrm{e}^{-tA}.
$$

Finally, using Saumard and Wellner (2014), $p_t$ is strongly log-concave with constant

$$
\alpha_{t} = \left(\frac{1}{(\alpha_{0} \wedge v^{-2}) \sigma_{\min}^{2}(\mathrm{e}^{-tA})} + \lambda_{\max}(\Sigma_{0,t})\right)^{-1}.
$$

$\square$

Proposition C.2. Assume that $H2'$ holds. Then, for all $t > 0$, $\nabla \log p_t$ is $L_t$-Lipschitz: for all $u \in \mathbb{R}^{2d}$,

$$
\left\| \nabla^{2} \log p_{t}(u) \right\| \leq L_{t} \leq \min\left\{\mathfrak{h}_{1,t}; \mathfrak{h}_{2,t}\right\},
$$

where

$$
\mathfrak{h}_{1,t} = \left(1 + (a + 1)^{2} t\right)^{2} \mathrm{e}^{2ta} \max\left\{L_{0}, v^{-2}\right\},
$$

$$
\mathfrak{h}_{2,t} = \frac{4}{\left\lfloor \sigma^{2} \min\{a, 1/a\} - \left(\sigma^{2} \max\{a, 1/a\} + 5\varepsilon^{2} a^{-1}\right) \mathrm{e}^{-2at} \right\rfloor_{+}}.
$$

Proof. Following Step 1 (lower bound on $\nabla^2 \log p_t$) in the proof of Proposition B.1, we obtain, for all $t > 0$,

$$
\nabla^{2} \log p_{t}(u) \succcurlyeq -\min\left\{\mathfrak{h}_{1,t}; \mathfrak{h}_{2,t}\right\} \mathbf{I}_{2d},
$$

with $\mathfrak{h}_{1,t}$ and $\mathfrak{h}_{2,t}$ as above. Moreover, Proposition C.1 implies that

$$
\nabla^{2} \log p_{t}(u) \preccurlyeq -\alpha_{t} \mathbf{I}_{2d} \prec 0_{2d \times 2d},
$$

where $\alpha_{t}$ is defined as in (47).
Consequently,

$$
\left\| \nabla^{2} \log p_{t}(u) \right\| \leq \min\left\{\mathfrak{h}_{1,t}; \mathfrak{h}_{2,t}\right\}.
$$

$\square$

# C.2 Proofs of the main results

Lemma C.3 (Discretization error). Assume that $H2'$ holds and let $\varepsilon > 0$. If the step size $h$ satisfies

$$
0 < h < \frac{2 \min_{k} \alpha_{t_{k}} (\sigma^{2} \wedge \varepsilon^{2}) - (\sigma - \varepsilon)^{2} \max_{k} L_{t_{k}} - (a + 1)^{2}}{\|A\|^{2} + (\varepsilon^{4} + \sigma^{4}) \max_{k} L_{t_{k}}^{2} + 2 (\sigma^{2} \vee \varepsilon^{2}) \|A\| \max_{k} L_{t_{k}}},
$$

then there exists $\delta_{\varepsilon} > 0$ such that $\mathcal{W}_2\left(\mathcal{L}\big(\overleftarrow{\mathbf{U}}_T\big), \mathcal{L}\big(\bar{\mathbf{U}}_T\big)\right) \leq 2\sqrt{h\, C_a(\varepsilon)}/\delta_\varepsilon$, where

$$
C_{a}(\varepsilon) = \left(2\|A\|^{4} B_{\varepsilon} + 4d\left(a^{2}\sigma^{2} + \varepsilon\right)^{2} \Lambda_{\varepsilon}^{*}(T)\right) h + 4d\left(\|A\|^{2} + \sigma^{4} \sup_{t \in [0,T]} L_{T - t}^{2}\right), \tag{48}
$$

with

$$
\Lambda_{\varepsilon}^{*}(T) = \min\left\{\frac{2a\left(1 + (a + 1)^{2} T\right)^{2}}{\min\left\{\varepsilon^{2}, \sigma^{2}\right\}}, \frac{4}{\sigma^{2} \min\{a, 1/a\} - \left(\sigma^{2} \max\left\{a, 1/a\right\} + 5 a^{-1} \varepsilon^{2}\right) \mathrm{e}^{-2aT}}\right\},
$$

such that $\sup_{T > 0} \Lambda_{\varepsilon}^{*}(T) < +\infty$, and

$$
B_{\varepsilon} := \max_{t \in [0,T]} \left(1 + (a + 1)^{2} (T - t)\right)^{2} \mathrm{e}^{-2a(T - t)} \|\overrightarrow{\mathbf{U}}_{0}\|_{L^{2}}^{2} + \frac{d}{2}\left(\sigma^{2} \max\{a, 1/a\} + \frac{5\varepsilon^{2}}{a}\right). \tag{49}
$$

Proof.
Consider a synchronous coupling of $(\overleftarrow{\mathbf{U}}_t)_{t\in [0,T]}$ and $(\bar{\mathbf{U}}_t)_{t\in [0,T]}$, i.e., drive the two processes with the same Brownian motion and start them from the same initial point, $\overleftarrow{\mathbf{U}}_0 = \bar{\mathbf{U}}_0$. Then it holds that

$$
\mathcal{W}_{2}\left(\mathcal{L}(\overleftarrow{\mathbf{U}}_{T}), \mathcal{L}(\bar{\mathbf{U}}_{T})\right) \leq \left\| \overleftarrow{\mathbf{U}}_{T} - \bar{\mathbf{U}}_{T} \right\|_{L_{2}}.
$$

Fix $\Delta \geq 0$ such that $t_N = T - \Delta$ and note that, for all $0 \leq k \leq N - 1$,

$$
\begin{array}{l} \left\| \overleftarrow{\mathbf{U}}_{t_{k+1}} - \bar{\mathbf{U}}_{t_{k+1}} \right\|_{L_{2}} \\ = \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} + \int_{t_{k}}^{t_{k+1}} \left\{-A\left(\overleftarrow{\mathbf{U}}_{t} - \bar{\mathbf{U}}_{t_{k}}\right) + \Sigma_{\varepsilon}^{2}\left(\mathbf{s}_{T - t}\left(\overleftarrow{\mathbf{U}}_{t}\right) - \mathbf{s}_{T - t_{k}}\left(\bar{\mathbf{U}}_{t_{k}}\right)\right)\right\} \mathrm{d}t \right\|_{L_{2}} \\ \leq A_{1,k} + A_{2,k}, \\ \end{array}
$$

where

$$
\begin{array}{l} A_{1,k} = \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} + \int_{t_{k}}^{t_{k+1}} \left\{-A\left(\overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}\right) + \Sigma_{\varepsilon}^{2}\left(\mathbf{s}_{T - t_{k}}\left(\overleftarrow{\mathbf{U}}_{t_{k}}\right) - \mathbf{s}_{T - t_{k}}\left(\bar{\mathbf{U}}_{t_{k}}\right)\right)\right\} \mathrm{d}t \right\|_{L_{2}}, \\ A_{2,k} = \left\| \int_{t_{k}}^{t_{k+1}} \left\{-A\left(\overleftarrow{\mathbf{U}}_{t} - \overleftarrow{\mathbf{U}}_{t_{k}}\right) + \Sigma_{\varepsilon}^{2}\left(\mathbf{s}_{T - t}
\left(\overleftarrow{\mathbf{U}}_{t}\right) - \mathbf{s}_{T - t_{k}}\left(\overleftarrow{\mathbf{U}}_{t_{k}}\right)\right)\right\} \mathrm{d}t \right\|_{L_{2}}. \\ \end{array}
$$

For the first term, note that,

$$
\begin{array}{l} A_{1,k}^{2} = \left\| \left(\mathbf{I}_{2d} - hA\right)\left(\overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}\right) + h\Sigma_{\varepsilon}^{2}\left(\mathbf{s}_{T - t_{k}}\left(\overleftarrow{\mathbf{U}}_{t_{k}}\right) - \mathbf{s}_{T - t_{k}}\left(\bar{\mathbf{U}}_{t_{k}}\right)\right) \right\|_{L_{2}}^{2} \\ = \left\| \left(\mathbf{I}_{2d} - hA\right)\left(\overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}\right) \right\|_{L_{2}}^{2} + \left\| h\Sigma_{\varepsilon}^{2}\left(\mathbf{s}_{T - t_{k}}\left(\overleftarrow{\mathbf{U}}_{t_{k}}\right) - \mathbf{s}_{T - t_{k}}\left(\bar{\mathbf{U}}_{t_{k}}\right)\right) \right\|_{L_{2}}^{2} \\ + 2h \mathbb{E}\left[\left(\overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}\right)^{\top}\left(\mathbf{I}_{2d} - hA\right)^{\top} \Sigma_{\varepsilon}^{2}\left(\mathbf{s}_{T - t_{k}}\left(\overleftarrow{\mathbf{U}}_{t_{k}}\right) - \mathbf{s}_{T - t_{k}}\left(\bar{\mathbf{U}}_{t_{k}}\right)\right)\right]. \\ \end{array}
$$

By Proposition C.2, the score at time $t$ is $L_{t}$-Lipschitz continuous for all $t \in [0,T]$; in particular,

$$
\left\| h\Sigma_{\varepsilon}^{2}\left(\mathbf{s}_{T - t_{k}}\left(\overleftarrow{\mathbf{U}}_{t_{k}}\right) - \mathbf{s}_{T - t_{k}}\left(\bar{\mathbf{U}}_{t_{k}}\right)\right) \right\|_{L_{2}}^{2} \leq h^{2}(\varepsilon^{4} + \sigma^{4}) L_{T - t_{k}}^{2} \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{L_{2}}^{2}.
$$

Therefore,

$$
\begin{array}{l} A_{1,k}^{2} \leq \mathbb{E}\left[\left(\overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}\right)^{\top}\left((\mathbf{I}_{2d} - hA)^{\top}(\mathbf{I}_{2d} - hA) + h^{2}\left(\varepsilon^{4} + \sigma^{4}\right) L_{T - t_{k}}^{2} \mathbf{I}_{2d}\right)\left(\overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}\right)\right] \\ + 2h \mathbb{E}\left[\left(\overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}\right)^{\top}\left(\mathbf{I}_{2d} - hA\right)^{\top} \Sigma_{\varepsilon}^{2}\left(\mathbf{s}_{T - t_{k}}\left(\overleftarrow{\mathbf{U}}_{t_{k}}\right) - \mathbf{s}_{T - t_{k}}\left(\bar{\mathbf{U}}_{t_{k}}\right)\right)\right]. \\ \end{array}
$$

For all $0 \leq t \leq T$, let $C_{t,k} := \int_0^1 \nabla^2 \log p_t\left(\overleftarrow{\mathbf{U}}_{t_k} - \gamma\left(\overleftarrow{\mathbf{U}}_{t_k} - \bar{\mathbf{U}}_{t_k}\right)\right) \mathrm{d}\gamma$ and write $\mathbf{A}_h = \mathbf{I}_{2d} - hA$, so that

$$
\begin{array}{l} A_{1,k}^{2} \leq \mathbb{E}\left[\left(\overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}\right)^{\top}\left(\mathbf{A}_{h}^{\top}\mathbf{A}_{h} + h^{2}(\varepsilon^{4} + \sigma^{4}) L_{T - t_{k}}^{2} \mathbf{I}_{2d} + 2h \mathbf{A}_{h}^{\top} \Sigma_{\varepsilon}^{2} C_{T - t_{k},k}\right)\left(\overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}\right)\right] \\ \leq \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{L_{2}}^{2} + h \mathbb{E}\left[\left(\overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}\right)^{\top} M_{h}(\varepsilon)\left(\overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}}\right)\right], \\ \end{array}
$$

where

$$
M_{h}(\varepsilon) = -\left(A^
{\top} + A\right) + 2 \Sigma_ {\varepsilon} ^ {2} C _ {T - t _ {k}, k} + h \left(A ^ {\top} A + \left(\varepsilon^ {4} + \sigma^ {4}\right) L _ {T - t _ {k}} ^ {2} \mathbf {I} _ {2 d} - 2 A ^ {\top} \Sigma_ {\varepsilon} ^ {2} C _ {T - t _ {k}, k}\right). +$$ + +In order to control $(\overleftarrow{\mathbf{U}}_{t_k} - \bar{\mathbf{U}}_{t_k})^\top M_h(\varepsilon)(\overleftarrow{\mathbf{U}}_{t_k} - \bar{\mathbf{U}}_{t_k})$ , it is enough to control the eigenvalues of $\tilde{M}_h(\varepsilon)$ where + +$$ +\begin{array}{l} \tilde {M} _ {h} (\varepsilon) = \frac {1}{2} (M _ {h} (\varepsilon) + M _ {h} (\varepsilon) ^ {\top}) \\ = - \left(A ^ {\top} + A\right) + \left(\Sigma_ {\varepsilon} ^ {2} C _ {T - t _ {k}, k} + C _ {T - t _ {k}, k} \Sigma_ {\varepsilon} ^ {2}\right) \\ + h \left\{A ^ {\top} A + \left(\varepsilon^ {4} + \sigma^ {4}\right) L _ {T - t _ {k}} ^ {2} \mathbf {I} _ {2 d} - \left(A ^ {\top} \Sigma_ {\varepsilon} ^ {2} C _ {T - t _ {k}, k} + C _ {T - t _ {k}, k} \Sigma_ {\varepsilon} ^ {2} A\right) \right\}. 
\\ \end{array}
$$

Noting that

$$
\Sigma_{\varepsilon}^{2} C_{T - t_{k},k} + C_{T - t_{k},k} \Sigma_{\varepsilon}^{2} = 2 \Sigma_{\varepsilon} C_{T - t_{k},k} \Sigma_{\varepsilon} + \Sigma_{\varepsilon}^{2} C_{T - t_{k},k} + C_{T - t_{k},k} \Sigma_{\varepsilon}^{2} - 2 \Sigma_{\varepsilon} C_{T - t_{k},k} \Sigma_{\varepsilon},
$$

we bound the two groups of terms separately. By Proposition C.1,

$$
\Sigma_{\varepsilon} C_{T - t_{k},k} \Sigma_{\varepsilon} \preccurlyeq -\alpha_{T - t_{k}} \lambda_{\min}\left(\Sigma_{\varepsilon}^{2}\right) \mathbf{I}_{2d} = -\alpha_{T - t_{k}}\left(\sigma^{2} \wedge \varepsilon^{2}\right) \mathbf{I}_{2d},
$$

and simple calculations yield

$$
\Sigma_{\varepsilon}^{2} C_{T - t_{k},k} + C_{T - t_{k},k} \Sigma_{\varepsilon}^{2} - 2 \Sigma_{\varepsilon} C_{T - t_{k},k} \Sigma_{\varepsilon} = (\sigma - \varepsilon)^{2} \left( \begin{array}{c c} 0_{d \times d} & C_{T - t_{k},k}^{12} \\ C_{T - t_{k},k}^{12} & 0_{d \times d} \end{array} \right),
$$

where $C_{T - t_{k},k}^{12}$ denotes the off-diagonal block of $C_{T - t_{k},k}$. Hence,

$$
\begin{array}{l} \Sigma_{\varepsilon}^{2} C_{T - t_{k},k} + C_{T - t_{k},k} \Sigma_{\varepsilon}^{2} - 2 \Sigma_{\varepsilon} C_{T - t_{k},k} \Sigma_{\varepsilon} \preccurlyeq (\sigma - \varepsilon)^{2} \| C_{T - t_{k},k} \| \mathbf{I}_{2d} \\ \preccurlyeq (\sigma - \varepsilon)^{2} L_{T - t_{k}} \mathbf{I}_{2d}, \\ \end{array}
$$

where we used Proposition C.2 in the last line.
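The block identity above can be checked numerically in the scalar-block case $d = 1$, where $\Sigma_\varepsilon = \operatorname{diag}(\sigma, \varepsilon)$ and the Hessian average $C$ is an arbitrary symmetric $2 \times 2$ matrix (all numerical values below are illustrative, not the constants of the lemma):

```python
# Check: S^2 C + C S^2 - 2 S C S == (sigma - eps)^2 * [[0, c12], [c12, 0]]
# for S = diag(sigma, eps) and symmetric C = [[c11, c12], [c12, c22]] (d = 1).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(X, Y, sx=1.0, sy=1.0):
    return [[sx * X[i][j] + sy * Y[i][j] for j in range(2)] for i in range(2)]

sigma, eps = 1.3, 0.4            # arbitrary diffusion parameters
c11, c12, c22 = -2.0, 0.7, -1.1  # arbitrary symmetric Hessian average
S = [[sigma, 0.0], [0.0, eps]]
S2 = matmul(S, S)
C = [[c11, c12], [c12, c22]]

# lhs = S^2 C + C S^2 - 2 S C S
lhs = matadd(matadd(matmul(S2, C), matmul(C, S2)),
             matmul(S, matmul(C, S)), sy=-2.0)
rhs = [[0.0, (sigma - eps) ** 2 * c12], [(sigma - eps) ** 2 * c12, 0.0]]

assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

The diagonal blocks cancel exactly, leaving only the off-diagonal block scaled by $(\sigma - \varepsilon)^2$, which is what makes this remainder term vanish in the equal-noise case $\sigma = \varepsilon$.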
It follows that

$$
\begin{array}{l} \tilde{M}_{h}(\varepsilon) \preccurlyeq -\lambda_{\min}(A^{\top} + A) \mathbf{I}_{2d} - 2\alpha_{T - t_{k}}\left(\sigma^{2} \wedge \varepsilon^{2}\right) \mathbf{I}_{2d} + (\sigma - \varepsilon)^{2} L_{T - t_{k}} \mathbf{I}_{2d} \\ + h\left(\|A\|^{2} + (\varepsilon^{4} + \sigma^{4}) L_{T - t_{k}}^{2} + 2\left(\sigma^{2} \vee \varepsilon^{2}\right) \|A\| L_{T - t_{k}}\right) \mathbf{I}_{2d}. \\ \end{array}
$$

Therefore, using that $\lambda_{\min}(A^\top + A) = -(a + 1)^2$, $\tilde{M}_h(\varepsilon)$ is negative definite when $h$ is chosen so that

$$
h < \frac{2 \min_{k} \alpha_{t_{k}}\left(\sigma^{2} \wedge \varepsilon^{2}\right) - (\sigma - \varepsilon)^{2} \max_{k} L_{t_{k}} - (a + 1)^{2}}{\|A\|^{2} + \left(\varepsilon^{4} + \sigma^{4}\right) \max_{k} L_{t_{k}}^{2} + 2\left(\sigma^{2} \vee \varepsilon^{2}\right) \|A\| \max_{k} L_{t_{k}}}. \tag{50}
$$

It follows that, when $h$ satisfies (50), there exists $\delta_{\varepsilon} > 0$ such that

$$
A_{1,k} \leq \sqrt{1 - h\delta_{\varepsilon}} \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{L_{2}}.
$$

For the second term $A_{2,k}$, denote the backward drift by $b(t, u) = -Au + \Sigma_{\varepsilon}^{2} \mathbf{s}_{T - t}(u)$, so that

$$
A_{2,k}^{2} = \mathbb{E}\left[\left\| \int_{t_{k}}^{t_{k+1}}\left(b(t, \overleftarrow{\mathbf{U}}_{t}) - b(t_{k}, \overleftarrow{\mathbf{U}}_{t_{k}})\right) \mathrm{d}t \right\|^{2}\right].
$$

Applying Lemma D.6 combined with Itô's formula, we obtain

$$
\begin{array}{l} \mathrm{d}b(t, \overleftarrow{\mathbf{U}}_{t}) = -A \, \mathrm{d}\overleftarrow{\mathbf{U}}_{t} + \Sigma_{\varepsilon}^{2} \, \mathrm{d}\mathbf{s}_{T - t}(\overleftarrow{\mathbf{U}}_{t}) \\ = \left\{AA \overleftarrow{\mathbf{U}}_{t} - A\Sigma_{\varepsilon}^{2} \mathbf{s}_{T - t}(\overleftarrow{\mathbf{U}}_{t}) + \Sigma_{\varepsilon}^{2} A^{\top} \mathbf{s}_{T - t}(\overleftarrow{\mathbf{U}}_{t})\right\} \mathrm{d}t + \left(A + \Sigma_{\varepsilon}^{2} \nabla^{2} \log p_{T - t}(\overleftarrow{\mathbf{U}}_{t})\right) \Sigma_{\varepsilon} \, \mathrm{d}B_{t} \\ = AA \overleftarrow{\mathbf{U}}_{t} \, \mathrm{d}t + \left(\Sigma_{\varepsilon}^{2} A^{\top} - A\Sigma_{\varepsilon}^{2}\right) \mathbf{s}_{T - t}(\overleftarrow{\mathbf{U}}_{t}) \, \mathrm{d}t + \left(A + \Sigma_{\varepsilon}^{2} \nabla^{2} \log p_{T - t}(\overleftarrow{\mathbf{U}}_{t})\right) \Sigma_{\varepsilon} \, \mathrm{d}B_{t}.
\\ \end{array}
$$

Setting $H_{s} = AA\overleftarrow{\mathbf{U}}_{s} + \left(\Sigma_{\varepsilon}^{2}A^{\top} - A\Sigma_{\varepsilon}^{2}\right)\mathbf{s}_{T - s}\left(\overleftarrow{\mathbf{U}}_{s}\right)$ and $K_{s} = \left(A + \Sigma_{\varepsilon}^{2}\nabla^{2}\log p_{T - s}(\overleftarrow{\mathbf{U}}_{s})\right)\Sigma_{\varepsilon}$, we have that

$$
\begin{array}{l} A_{2,k}^{2} = \mathbb{E}\left[\left\| \int_{t_{k}}^{t_{k+1}} \int_{t_{k}}^{t} H_{s} \, \mathrm{d}s \, \mathrm{d}t + \int_{t_{k}}^{t_{k+1}} \int_{t_{k}}^{t} K_{s} \, \mathrm{d}B_{s} \, \mathrm{d}t \right\|^{2}\right] \\ \leq 2h \int_{t_{k}}^{t_{k+1}} \mathbb{E}\left[\left\| \int_{t_{k}}^{t} H_{s} \, \mathrm{d}s \right\|^{2}\right] \mathrm{d}t + 2h^{2} \mathbb{E}\left[\sup_{t \in [t_{k}, t_{k+1}]} \left\| \int_{t_{k}}^{t} K_{s} \, \mathrm{d}B_{s} \right\|^{2}\right], \\ \end{array}
$$

by convexity of $\|\cdot\|^2$. Using convexity again (or applying the Cauchy–Schwarz inequality), we have $\mathbb{E}\left[\left\| \int_{t_k}^{t} H_s \, \mathrm{d}s \right\|^2\right] \leq h \int_{t_k}^{t} \mathbb{E}\left[\|H_s\|^2\right] \mathrm{d}s$, and then

$$
A_{2,k}^{2} \leq h^{4} \sup_{t \in [t_{k}, t_{k+1}]} \mathbb{E}\left[\|H_{t}\|^{2}\right] + 2h^{2} \mathbb{E}\left[\sup_{t \in [t_{k}, t_{k+1}]} \left\| \int_{t_{k}}^{t} K_{s} \, \mathrm{d}B_{s} \right\|^{2}\right].
\tag {51} +$$ + +First we have for $t \in [0, T]$ , + +$$ +\mathbb {E} \left[ \| H _ {t} \| ^ {2} \right] \leq 2 \| A \| _ {2} ^ {4} \mathbb {E} \left[ \| \overleftarrow {\mathbf {U}} _ {t} \| ^ {2} \right] + 2 \left\| \Sigma_ {\varepsilon} ^ {2} A ^ {\top} - A \Sigma_ {\varepsilon} ^ {2} \right\| ^ {2} \mathbb {E} \left[ \left\| \mathsf {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \right\| ^ {2} \right], +$$ + +and by Lemma D.3 and Lemma D.5 we get + +$$ +\mathbb {E} \left[ \| H _ {t} \| ^ {2} \right] \leq 2 \| A \| ^ {4} B _ {\varepsilon} + 2 \left\| \Sigma_ {\varepsilon} ^ {2} A ^ {\top} - A \Sigma_ {\varepsilon} ^ {2} \right\| ^ {2} \frac {2 d}{\lambda_ {\min} (\Sigma_ {0 , T - t})}, +$$ + +where $B_{\varepsilon}$ is defined in (49). By Lemma A.3 we get + +$$ +\max _ {t \in [ 0, T ]} \mathbb {E} \left[ \| H _ {t} \| ^ {2} \right] \leq 2 \| A \| ^ {4} B _ {\varepsilon} + 4 d \left(a ^ {2} \sigma^ {2} + \varepsilon\right) ^ {2} \Lambda_ {\varepsilon} ^ {*} (T), \tag {52} +$$ + +with + +$$ +\Lambda_ {\varepsilon} ^ {*} (T) = \min \left\{\frac {2 a \left(1 + (a + 1) ^ {2} T\right) ^ {2}}{\min \left\{\varepsilon^ {2} , \sigma^ {2} \right\}}, \frac {4}{\sigma^ {2} \min \{a , 1 / a \} - \left(\sigma^ {2} \max \{a , 1 / a \} + 5 a ^ {- 1} \varepsilon^ {2}\right) \mathrm {e} ^ {- 2 a T}} \right\}, +$$ + +such that $\sup_{T > 0}\Lambda_{\varepsilon}^{*}(T) < + \infty$ + +Now by Doob's inequality and Itô's isometry, we have + +$$ +\mathbb {E} \left[ \sup _ {t \in [ t _ {k}, t _ {k + 1} ]} \left\| \int_ {t _ {k}} ^ {t} K _ {s} \mathrm {d} B _ {s} \right\| ^ {2} \right] = \int_ {t _ {k}} ^ {t _ {k + 1}} \mathbb {E} \left[ \left\| A + \Sigma_ {\varepsilon} ^ {2} \nabla^ {2} \log p _ {T - s} (\overleftarrow {\mathbf {U}} _ {s}) \right\| _ {F} ^ {2} \right] \mathrm {d} s, +$$ + +where $\| \cdot \| _F$ is the Frobenius norm, so that + +$$ +\mathbb {E} \left[ \sup _ {t \in [ t _ {k}, t _ {k + 1} ]} \left\| \int_ {t _ {k}} ^ {t} K _ {s} \mathrm {d} B _ {s} \right\| ^ {2} 
\right] \leq hd \max_{t \in [t_{k}, t_{k+1}]} \mathbb{E}\left[\left\| A + \Sigma_{\varepsilon}^{2} \nabla^{2} \log p_{T - t}(\overleftarrow{\mathbf{U}}_{t}) \right\|^{2}\right].
$$

Using the $L_{t}$-Lipschitz continuity of the score at time $t$, we have

$$
\mathbb{E}\left[\sup_{t \in [t_{k}, t_{k+1}]} \left\| \int_{t_{k}}^{t} K_{s} \, \mathrm{d}B_{s} \right\|^{2}\right] \leq 2hd\left(\|A\|^{2} + \max\left\{\sigma^{4}, \varepsilon^{4}\right\} \sup_{t \in [t_{k}, t_{k+1}]} L_{T - t}^{2}\right). \tag{53}
$$

Plugging (52) and (53) into (51), we obtain

$$
A_{2,k}^{2} \leq C_{a}(\varepsilon) h^{3}, \tag{54}
$$

with $C_a(\varepsilon)$ defined in (48).

Combining the bound on $A_{1,k}$ with the bound on $A_{2,k}$ yields

$$
\left\| \overleftarrow{\mathbf{U}}_{t_{k+1}} - \bar{\mathbf{U}}_{t_{k+1}} \right\|_{L_{2}} \leq \sqrt{1 - h\delta_{\varepsilon}} \left\| \overleftarrow{\mathbf{U}}_{t_{k}} - \bar{\mathbf{U}}_{t_{k}} \right\|_{L_{2}} + h\sqrt{h\, C_{a}(\varepsilon)}.
$$

Using that $\overleftarrow{\mathbf{U}}_0 - \bar{\mathbf{U}}_0 = 0$, we have by induction

$$
\begin{array}{l} \left\| \overleftarrow{\mathbf{U}}_{t_{N}} - \bar{\mathbf{U}}_{t_{N}} \right\|_{L_{2}} \leq \sum_{k = 0}^{N - 1} \prod_{j = k + 1}^{N - 1}\left(1 - h\delta_{\varepsilon}\right)^{1/2} h\sqrt{h\, C_{a}(\varepsilon)} \\ \leq \frac{2}{\delta_{\varepsilon}} \sqrt{h\, C_{a}(\varepsilon)}, \\ \end{array}
$$

since $\sqrt{1 - \delta_{\varepsilon}h} \leq 1 - h\delta_{\varepsilon}/2$. Letting $\Delta \to 0$ together with Fatou's lemma finishes the proof. $\square$

Lemma C.4 (Approximation error). Assume that $H2'$ and $H3$ hold.
Then, there exists $\delta_{\varepsilon} > 0$ such that + +$$ +\mathcal {W} _ {2} \left(\mathcal {L} \left(\bar {\mathbf {U}} _ {T} ^ {\infty}\right), \mathcal {L} \left(\bar {\mathbf {U}} _ {T} ^ {\theta}\right)\right) \leq \frac {2}{\delta_ {\varepsilon}} \max \left\{\varepsilon^ {2}, \sigma^ {2} \right\} M. +$$ + +Proof. Note that + +$$ +\mathcal {W} _ {2} \left(\mathcal {L} \left(\bar {\mathbf {U}} _ {T} ^ {\infty}\right), \mathcal {L} \left(\bar {\mathbf {U}} _ {T} ^ {\theta}\right)\right) \leq \left\| \bar {\mathbf {U}} _ {T} ^ {\infty} - \bar {\mathbf {U}} _ {T} ^ {\theta} \right\| _ {L _ {2}}. +$$ + +Using a decomposition similar to that in C.3, with $t_N = T - \Delta$ , we obtain: + +$$ +\begin{array}{l} \left\| \bar {\mathbf {U}} _ {t _ {k + 1}} ^ {\infty} - \bar {\mathbf {U}} _ {t _ {k + 1}} ^ {\theta} \right\| _ {L _ {2}} \\ = \left\| \bar {\mathbf {U}} _ {t _ {k}} ^ {\infty} - \bar {\mathbf {U}} _ {t _ {k}} ^ {\theta} + \int_ {t _ {k}} ^ {t _ {k + 1}} - A \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\infty} - \bar {\mathbf {U}} _ {t _ {k}} ^ {\theta}\right) + \Sigma_ {\varepsilon} ^ {2} \left(\mathbf {s} _ {T - t _ {k}} \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\infty}\right) - s _ {\theta} \left(T - t _ {k}, \bar {\mathbf {U}} _ {t _ {k}} ^ {\theta}\right)\right) \mathrm {d} t \right\| _ {L _ {2}} \\ \leq \left\| \bar {\mathbf {U}} _ {t _ {k}} ^ {\infty} - \bar {\mathbf {U}} _ {t _ {k}} ^ {\theta} + \int_ {t _ {k}} ^ {t _ {k + 1}} - A \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\infty} - \bar {\mathbf {U}} _ {t _ {k}} ^ {\theta}\right) + \Sigma_ {\varepsilon} ^ {2} \left(\mathfrak {s} _ {T - t _ {k}} \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\infty}\right) - \mathfrak {s} _ {T - t _ {k}} \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\theta}\right)\right) d t \right\| _ {L _ {2}} \\ + \left\| \int_ {t _ {k}} ^ {t _ {k + 1}} \Sigma_ {\varepsilon} ^ {2} \left(\mathsf {s} _ {T - t _ {k}} \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\theta}\right) - s _ {\theta} \left(T - t _ 
{k}, \bar {\mathbf {U}} _ {t _ {k}} ^ {\theta}\right)\right) \mathrm {d} t \right\| _ {L _ {2}} \\ =: B _ {1, k} + B _ {2, k}. \\ \end{array} +$$ + +For the first term, note that, + +$$ +\begin{array}{l} B _ {1, k} ^ {2} = \left\| \left(\mathbf {I} _ {2 d} - h A\right) \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\infty} - \bar {\mathbf {U}} _ {t _ {k}} ^ {\theta}\right) + h \Sigma_ {\varepsilon} ^ {2} \left(\mathsf {s} _ {T - t _ {k}} \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\infty}\right) - \mathsf {s} _ {T - t _ {k}} \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\theta}\right)\right) \right\| _ {L _ {2}} ^ {2} \\ = \left\| \left(\mathbf {I} _ {2 d} - h A\right) \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\infty} - \bar {\mathbf {U}} _ {t _ {k}} ^ {\theta}\right) \right\| _ {L _ {2}} ^ {2} + \left\| h \Sigma_ {\varepsilon} ^ {2} \left(\mathsf {s} _ {T - t _ {k}} \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\infty}\right) - \mathsf {s} _ {T - t _ {k}} \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\theta}\right)\right) \right\| _ {L _ {2}} ^ {2} \\ + 2 h \mathbb {E} \left[ \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\infty} - \bar {\mathbf {U}} _ {t _ {k}} ^ {\theta}\right) ^ {\top} \left(\mathbf {I} _ {2 d} - h A\right) ^ {\top} \Sigma_ {\varepsilon} ^ {2} \left(\mathsf {s} _ {T - t _ {k}} \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\infty}\right) - \mathsf {s} _ {T - t _ {k}} \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\theta}\right)\right) \right]. \\ \end{array} +$$ + +It follows that $B_{1,k}$ can be treated similarly to $A_{1,k}$: using H2 and $\mathrm{H}2'$, for $h$ satisfying (50), we have + +$$ +B _ {1, k} \leq \sqrt {1 - h \delta_ {\varepsilon}} \left\| \bar {\mathbf {U}} _ {t _ {k}} ^ {\infty} - \bar {\mathbf {U}} _ {t _ {k}} ^ {\theta} \right\| _ {L _ {2}}, +$$ + +where $\delta_{\varepsilon}$ is defined as in the proof of Lemma C.3.
For $B_{2,k}$, using Assumption H3, we get + +$$ +B _ {2, k} \leq h \| \Sigma_ {\epsilon} \| ^ {2} M \leq h \max \left\{\varepsilon^ {2}, \sigma^ {2} \right\} M. +$$ + +Finally, for $h$ satisfying (50), it follows from the same argument as in the proof of Lemma C.3 that + +$$ +\left\| \bar {\mathbf {U}} _ {t _ {N}} ^ {\infty} - \bar {\mathbf {U}} _ {t _ {N}} ^ {\theta} \right\| _ {L _ {2}} \leq \frac {2}{\delta_ {\varepsilon}} \max \left\{\varepsilon^ {2}, \sigma^ {2} \right\} M. +$$ + +Taking the limit as $\Delta \to 0$, together with Fatou's lemma, finishes the proof. + +![](images/bc1922b96e3df65e9bdc2556e5d87572fc4e24d9b08dc40d0c3061a37f3df20f.jpg) + +Lemma C.5 (Mixing time). Assume that $H2'$ holds. Then + +$$ +\mathcal {W} _ {2} \left(\mathcal {L} \left(\bar {\mathbf {U}} _ {T}\right), \mathcal {L} \left(\bar {\mathbf {U}} _ {T} ^ {\infty}\right)\right) \leq K _ {T} e ^ {- a T} \mathcal {W} _ {2} \left(\pi_ {\text {d a t a}} \otimes \pi_ {v}, \pi_ {\infty}\right), +$$ + +with + +$$ +K _ {T} := (1 + \max \{a + 1; a (a + 1) \} T). +$$ + +Proof. Consider a synchronous coupling of the continuous-time interpolations $(\bar{\mathbf{U}}_t)_{t\in [0,T]}$ and $(\bar{\mathbf{U}}_t^\infty)_{t\in [0,T]}$, with initialization chosen such that + +$$ +\mathcal {W} _ {2} \left(\pi_ {\infty}, \mathcal {L} \left(\overrightarrow {\mathbf {U}} _ {T}\right)\right) = \left\| \bar {\mathbf {U}} _ {0} - \bar {\mathbf {U}} _ {0} ^ {\infty} \right\| _ {L _ {2}}. +$$ + +By definition of the $\mathcal{W}_2$ distance, + +$$ +\mathcal {W} _ {2} \left(\mathcal {L} \left(\bar {\mathbf {U}} _ {T}\right), \mathcal {L} \left(\bar {\mathbf {U}} _ {T} ^ {\infty}\right)\right) \leq \left\| \bar {\mathbf {U}} _ {T} - \bar {\mathbf {U}} _ {T} ^ {\infty} \right\| _ {L _ {2}}.
+$$ + +For $t \in [t_k, t_{k+1}]$ and with $t_N = T - \Delta$ we have + +$$ +\begin{array}{l} \left\| \bar {\mathbf {U}} _ {t _ {k + 1}} - \bar {\mathbf {U}} _ {t _ {k + 1}} ^ {\infty} \right\| _ {L _ {2}} \\ = \left\| \bar {\mathbf {U}} _ {t _ {k}} - \bar {\mathbf {U}} _ {t _ {k}} ^ {\infty} + \int_ {t _ {k}} ^ {t _ {k + 1}} - A \left(\bar {\mathbf {U}} _ {t _ {k}} - \bar {\mathbf {U}} _ {t _ {k}} ^ {\infty}\right) + \Sigma^ {2} \left(\mathbf {s} _ {T - t _ {k}} \left(\bar {\mathbf {U}} _ {t _ {k}}\right) - \mathbf {s} _ {T - t _ {k}} \left(\bar {\mathbf {U}} _ {t _ {k}} ^ {\infty}\right)\right) d t \right\| _ {L _ {2}} \\ \leq \left\| \bar {\mathbf {U}} _ {t _ {k}} - \bar {\mathbf {U}} _ {t _ {k}} ^ {\infty} \right\| _ {L _ {2}} \delta_ {k}, \\ \end{array} +$$ + +where $\delta_{k}$ is defined as in (50). As a consequence, + +$$ +\left\| \bar {\mathbf {U}} _ {T} - \bar {\mathbf {U}} _ {T} ^ {\infty} \right\| _ {L _ {2}} \leq \left\| \bar {\mathbf {U}} _ {0} - \bar {\mathbf {U}} _ {0} ^ {\infty} \right\| _ {L _ {2}} \prod_ {\ell = 0} ^ {N - 1} \delta_ {\ell}, +$$ + +where we let $\Delta \to 0$ and use Fatou's lemma. Finally, applying Lemma A.4 yields + +$$ +\left\| \bar {\mathbf {U}} _ {0} - \bar {\mathbf {U}} _ {0} ^ {\infty} \right\| _ {L _ {2}} = \mathcal {W} _ {2} \left(\pi_ {\infty}, \mathcal {L} \left(\overrightarrow {\mathbf {U}} _ {T}\right)\right) \leq K _ {T} e ^ {- a T} \mathcal {W} _ {2} \left(\pi_ {\mathrm {d a t a}} \otimes \pi_ {v}, \pi_ {\infty}\right), +$$ + +which finishes the proof. + +![](images/1decc8415baf446b79a137303087467ef8757d36679e844266e8228b21da149a.jpg) + +# D Technical Lemmata + +Lemma D.1. Assume that $H2$ holds. Then, the data distribution $p_{\mathrm{data}}(x) \propto \exp(-(V(x) + H(x)))$ has sub-Gaussian tails, i.e., there exist constants $C, \kappa > 0$ such that + +$$ +p _ {\mathrm {d a t a}} (x) \leq C \exp (- \kappa \| x \| ^ {2}), \qquad x \in \mathbb {R} ^ {d}.
+$$ + +In particular, $\pi_{\mathrm{data}}$ admits finite moments of all orders. + +Proof. By $\alpha$ -strong convexity of $V$ , for all $x, y \in \mathbb{R}^d$ , + +$$ +V (x) \geq V (y) + \nabla V (y) ^ {\top} (x - y) + \frac {\alpha}{2} \| x - y \| ^ {2}. +$$ + +Let $x^{*}$ denote the unique minimizer of $V$ , so that $\nabla V(x^{*}) = 0$ . Then, + +$$ +V (x) \geq V \left(x ^ {*}\right) + \frac {\alpha}{2} \| x - x ^ {*} \| ^ {2} \geq \frac {\alpha}{4} \| x \| ^ {2} - c _ {1}, +$$ + +for some constant $c_{1}\in \mathbb{R}$ . Since $H$ is $L$ -Lipschitz, we have + +$$ +H (x) \geq H \left(x ^ {*}\right) - L \| x - x ^ {*} \| \geq - L \| x \| + c _ {2}, +$$ + +for some $c_{2}\in \mathbb{R}$ . Combining these two inequalities yields, for some $C\in \mathbb{R}$ + +$$ +V (x) + H (x) \geq \frac {\alpha}{4} \| x \| ^ {2} - L \| x \| + C. +$$ + +Using Young's inequality $L\| x\| \leq \alpha \| x\| ^2 /8 + 2L^2 /\alpha$ , we obtain + +$$ +V (x) + H (x) \geq \frac {\alpha}{8} \| x \| ^ {2} - \frac {2 L ^ {2}}{\alpha} + C. +$$ + +Hence, up to a multiplicative constant, + +$$ +p _ {\mathrm {d a t a}} (x) \propto \exp (- (V (x) + H (x))) \leq C ^ {\prime} \exp \left(- \frac {\alpha}{8} \| x \| ^ {2}\right), +$$ + +for some $C' > 0$ which concludes the proof. + +![](images/cf3fd3a440b913871186fdd88523ecd2347a62f37bbe03afb377fdff58225112.jpg) + +Lemma D.2. Assume that $H2$ holds and that there exist $m \in \mathbb{N}$ and $C > 0$ such that, for all $x \in \mathbb{R}^d$ , + +$$ +\left\| \nabla V (x) \right\| \leq C \left(1 + \| x \| ^ {m}\right). \tag {55} +$$ + +Then, the relative Fisher Information between $\pi_0 = \pi_{\mathrm{data}}\otimes \pi_v$ (i.e. the initialization of the stochastic process defined in (4)) and $\pi_{\infty}$ is finite, i.e. + +$$ +\mathcal {I} (\pi_ {0} | \pi_ {\infty}) := \int \left\| \nabla \log \left(\frac {\mathrm {d} \pi_ {0}}{\mathrm {d} \pi_ {\infty}} (u)\right) \right\| ^ {2} \pi_ {0} (\mathrm {d} u) < \infty . +$$ + +Proof. 
From Assumption H2, together with the fact that $\pi_0 = \pi_{\mathrm{data}}\otimes \pi_v$ and $\pi_v\sim \mathcal{N}\left(0_d,v^2\mathbf{I}_d\right)$ + +$$ +p _ {0} (u) = p _ {\mathrm {d a t a}} (x) \mathcal {N} (y; 0 _ {d}, v ^ {2} \mathbf {I} _ {d}) \propto \mathrm {e} ^ {- (V (x) + H (x))} \mathrm {e} ^ {- \frac {\| y \| ^ {2}}{2 v ^ {2}}}. +$$ + +Therefore, the relative Fisher Information satisfies + +$$ +\begin{array}{l} \mathcal {I} (\pi_ {0} | \pi_ {\infty}) = \mathbb {E} \left[ \left\| - \binom {\nabla V (\vec {X} _ {0}) + \nabla H (\vec {X} _ {0})} {v ^ {- 2} \vec {V} _ {0}} + \Sigma_ {\infty} ^ {- 1} \binom {\vec {X} _ {0}} {\vec {V} _ {0}} \right\| ^ {2} \right] \\ \leq 2 \mathbb {E} \left[ \left\| - \left( \begin{array}{c} \nabla V (\vec {X} _ {0}) + \nabla H (\vec {X} _ {0}) \\ v ^ {- 2} \vec {V} _ {0} \end{array} \right) \right\| ^ {2} \right] + 2 \mathbb {E} \left[ \left\| \Sigma_ {\infty} ^ {- 1} \left( \begin{array}{c} \vec {X} _ {0} \\ \vec {V} _ {0} \end{array} \right) \right\| ^ {2} \right]. \\ \end{array} +$$ + +By Lemma D.1, $\pi_{\mathrm{data}}$ has sub-Gaussian tails, hence + +$$ +\mathbb {E} \left[ \left\| \Sigma_ {\infty} ^ {- 1} \left( \begin{array}{c} \overrightarrow {X} _ {0} \\ \overrightarrow {V} _ {0} \end{array} \right) \right\| ^ {2} \right] < \infty . +$$ + +Moreover, + +$$ +\mathbb {E} \left[ \left\| - \left( \begin{array}{c} \nabla V (\vec {X} _ {0}) + \nabla H (\vec {X} _ {0}) \\ v ^ {- 2} \vec {V} _ {0} \end{array} \right) \right\| ^ {2} \right] \leq 2 \mathbb {E} \left[ \left\| \nabla V (\vec {X} _ {0}) \right\| ^ {2} \right] + 2 \mathbb {E} \left[ \left\| \nabla H (\vec {X} _ {0}) \right\| ^ {2} \right] + v ^ {- 4} \mathbb {E} \left[ \left\| \vec {V} _ {0} \right\| ^ {2} \right] +$$ + +Since $\overrightarrow{V}_0$ is Gaussian, $\mathbb{E}[\| \overrightarrow{V}_0\|^2 ] < \infty$ , and by Assumption H2, $H$ is $L$ -Lipschitz, so that $\mathbb{E}[\| \nabla H(\overrightarrow{X}_0)\|^2 ]\leq L^2$ . 
Using (55), together with $(1 + \|x\|^{m})^{2} \leq 2 (1 + \|x\|^{2 m})$, we get + +$$ +\mathbb {E} \left[ \left\| \nabla V (\vec {X} _ {0}) \right\| ^ {2} \right] \leq 2 C ^ {2} \left(1 + \mathbb {E} \left[ \left\| \vec {X} _ {0} \right\| ^ {2 m} \right]\right) < \infty , +$$ + +by sub-Gaussianity of $\pi_{\mathrm{data}}$, which concludes the proof. + +![](images/de4dd6b269392f6aaff74d8449651a6b22129a3622f2817c957cb78e06b9db6f.jpg) + +Lemma D.3. Assume that $(\vec{\mathbf{U}}_t)_{t\in [0,T]}$ is solution to (9) and that $\vec{\mathbf{U}}_0$ admits a second order moment. Then, for all $0\leq t\leq T$ and all $\varepsilon \geq 0$, + +$$ +\left\| \overleftarrow {\mathbf {U}} _ {t} \right\| _ {L _ {2}} ^ {2} \leq (1 + (a + 1) ^ {2} (T - t)) ^ {2} \mathrm {e} ^ {- 2 a (T - t)} \left\| \overrightarrow {\mathbf {U}} _ {0} \right\| _ {L _ {2}} ^ {2} + \frac {d}{2} (\sigma^ {2} \max \{a, 1 / a \} + \frac {5 \varepsilon^ {2}}{a}) =: B. \tag {56} +$$ + +Proof. Note that, in distribution, + +$$ +\overrightarrow {\mathbf {U}} _ {t} \stackrel {{\mathcal {L}}} {{=}} \mathrm {e} ^ {t A} \overrightarrow {\mathbf {U}} _ {0} + \Sigma_ {0, t} ^ {1 / 2} G, +$$ + +with $\overrightarrow{\mathbf{U}}_0\sim \pi_{\mathrm{data}}\otimes \pi_v,G\sim \mathcal{N}(0,\mathbf{I}_{2d})$, and where $G$ and $\overrightarrow{\mathbf{U}}_0$ are independent.
Since $G$ and $\overrightarrow{\mathbf{U}}_0$ are independent, using time-reversal and sub-multiplicativity of matrix norms, we have + +$$ +\begin{array}{l} \mathbb {E} \left[ \left\| \overleftarrow {\mathbf {U}} _ {T - t} \right\| ^ {2} \right] = \mathbb {E} \left[ \left\| \overrightarrow {\mathbf {U}} _ {t} \right\| ^ {2} \right] = \mathbb {E} \left[ \left\| \mathrm {e} ^ {t A} \overrightarrow {\mathbf {U}} _ {0} \right\| ^ {2} \right] + \mathbb {E} \left[ \left\| \Sigma_ {0, t} ^ {1 / 2} G \right\| ^ {2} \right] \\ \leq \left\| \mathrm {e} ^ {t A} \right\| ^ {2} \mathbb {E} \left[ \left\| \overrightarrow {\mathbf {U}} _ {0} \right\| ^ {2} \right] + \left\| \Sigma_ {0, t} ^ {1 / 2} \right\| ^ {2} \mathbb {E} \left[ \| G \| ^ {2} \right] \\ = \left\| \mathrm {e} ^ {t A} \right\| ^ {2} \mathbb {E} \left[ \left\| \overrightarrow {\mathbf {U}} _ {0} \right\| ^ {2} \right] + 2 d \lambda_ {\max } (\Sigma_ {0, t}). \\ \end{array} +$$ + +We conclude by applying Lemma A.1 to bound $\left\| \mathrm{e}^{tA}\right\|^2$ and Lemma A.3 to bound $\lambda_{\max}(\Sigma_{0,t})$. + +![](images/7769f545e839683d65e76ed250d9af0d32ddd338242bea1fac979a3465bc0907.jpg) + +Remark D.4. Lemma D.3 holds true when $\overleftarrow{\mathbf{U}}_t$ is defined as in (4) by setting $\varepsilon = 0$. + +Lemma D.5. Assume that $(\vec{\mathbf{U}}_t)_{t\in [0,T]}$ is solution to (9). Then, + +$$ +\mathbb {E} \left[ \left\| \mathbf {s} _ {T - t} \left(\overleftarrow {\mathbf {U}} _ {t}\right) \right\| ^ {2} \right] \leq \frac {2 d}{\lambda_ {\min } \left(\Sigma_ {0 , T - t}\right)}, +$$ + +where $\Sigma_{0,t}$ is defined in (20). + +Proof. By the time-reversal property, + +$$ +\mathbb {E} \left[ \left\| \mathbf {s} _ {T - t} \left(\overleftarrow {\mathbf {U}} _ {t}\right) \right\| ^ {2} \right] = \mathbb {E} \left[ \left\| \mathbf {s} _ {T - t} \left(\overrightarrow {\mathbf {U}} _ {T - t}\right) \right\| ^ {2} \right].
+$$ + +Note that + +$$ +\mathsf {s} _ {T - t} \big (\overrightarrow {\mathbf {U}} _ {T - t} \big) = \mathbb {E} \left[ \Sigma_ {0, T - t} ^ {- 1} (e ^ {(T - t) A} \overrightarrow {\mathbf {U}} _ {0} - \overrightarrow {\mathbf {U}} _ {T - t}) | \overrightarrow {\mathbf {U}} _ {T - t} \right], +$$ + +then, using Jensen's inequality and the tower property, + +$$ +\mathbb {E} \left[ \left\| \mathbf {s} _ {T - t} \left(\overleftarrow {\mathbf {U}} _ {t}\right) \right\| ^ {2} \right] \leq \mathbb {E} \left[ \left\| \Sigma_ {0, T - t} ^ {- 1} \left(e ^ {(T - t) A} \overrightarrow {\mathbf {U}} _ {0} - \overrightarrow {\mathbf {U}} _ {T - t}\right) \right\| ^ {2} \right]. +$$ + +Since $\overrightarrow{\mathbf{U}}_t \stackrel{\mathcal{L}}{=} \mathrm{e}^{tA} \overrightarrow{\mathbf{U}}_0 + \Sigma_{0,t}^{1/2} G$ with $\overrightarrow{\mathbf{U}}_0 \sim \pi_{\mathrm{data}} \otimes \pi_v$ , $G \sim \mathcal{N}(0, \mathbf{I}_{2d})$ , and where $G$ and $\overrightarrow{\mathbf{U}}_0$ are independent, we have + +$$ +\mathbb {E} \left[ \left\| \mathbf {s} _ {T - t} \left(\overleftarrow {\mathbf {U}} _ {t}\right) \right\| ^ {2} \right] \leq \mathbb {E} \left[ \left\| \Sigma_ {0, T - t} ^ {- 1 / 2} G \right\| ^ {2} \right] = \operatorname {T r} \left(\Sigma_ {0, T - t} ^ {- 1}\right), +$$ + +which completes the proof. + +Lemma D.6. Assume that $(\overleftarrow{\mathbf{U}}_t)_{t\in [0,T]}$ is solution to the backward SDE associated with (9). Then, + +$$ +\mathrm {d} (\nabla \log p _ {T - t} (\overleftarrow {\mathbf {U}} _ {t})) = A ^ {\top} \nabla \log p _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \mathrm {d} t + \nabla^ {2} \log p _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \Sigma_ {\varepsilon} \mathrm {d} B _ {t}. +$$ + +Proof. 
The Fokker-Planck equation for the SDE defined in (4) yields, for $u \in \mathbb{R}^{2d}$, + +$$ +\partial_ {t} p _ {t} (u) = - \operatorname {d i v} \left(A u p _ {t} (u)\right) + \frac {1}{2} \operatorname {d i v} \left(\Sigma_ {\varepsilon} ^ {2} \nabla p _ {t} (u)\right). \tag {57} +$$ + +First, using the notation introduced in (10), + +$$ +\begin{array}{l} \operatorname {d i v} (A u p _ {t} (u)) = \sum_ {i = 1} ^ {2 d} \frac {\partial \left(A u p _ {t} (u)\right) _ {i}}{\partial u _ {i}} \\ = \sum_ {i = 1} ^ {2 d} \sum_ {j = 1} ^ {2 d} \frac {\partial}{\partial u _ {i}} \left(A _ {i j} u _ {j} p _ {t} (u)\right) \\ = \sum_ {i = 1} ^ {2 d} A _ {i i} p _ {t} (u) + \sum_ {i = 1} ^ {2 d} \sum_ {j = 1} ^ {2 d} A _ {i j} u _ {j} \frac {\partial}{\partial u _ {i}} p _ {t} (u) \\ = \sum_ {i = 1} ^ {2 d} A _ {i i} p _ {t} (u) + \left(A u\right) ^ {\top} \nabla p _ {t} (u) \\ = \operatorname {T r} (A) p _ {t} (u) + (A u) ^ {\top} \nabla p _ {t} (u) \\ = p _ {t} (u) \left(\operatorname {T r} (A) + (A u) ^ {\top} \mathfrak {s} _ {t} (u)\right). \\ \end{array} +$$ + +Second, using the product rule for divergence, + +$$ +\begin{array}{l} \frac {1}{2} \operatorname {d i v} \left(\Sigma_ {\varepsilon} ^ {2} \nabla p _ {t} (u)\right) = \frac {1}{2} \operatorname {d i v} \left(\Sigma_ {\varepsilon} ^ {2} p _ {t} (u) \mathbf {s} _ {t} (u)\right) \\ = \frac {1}{2} \operatorname {d i v} \left(p _ {t} (u) \Sigma_ {\varepsilon} ^ {2} \mathbf {s} _ {t} (u)\right) \\ = \frac {1}{2} \left(p _ {t} (u) \operatorname {d i v} \left(\Sigma_ {\varepsilon} ^ {2} \mathbf {s} _ {t} (u)\right) + \left(\Sigma_ {\varepsilon} ^ {2} \mathbf {s} _ {t} (u)\right) ^ {\top} \nabla p _ {t} (u)\right) \\ = \frac {1}{2} p _ {t} (u) \left(\operatorname {d i v} \left(\Sigma_ {\varepsilon} ^ {2} \mathfrak {s} _ {t} (u)\right) + \mathfrak {s} _ {t} (u) ^ {\top} \Sigma_ {\varepsilon} ^ {2} \mathfrak {s} _ {t} (u)\right).
\\ \end{array} +$$ + +Hence, dividing (57) by $p_t$ yields + +$$ +\partial_ {t} \log p _ {t} (u) = - \operatorname {T r} (A) - (A u) ^ {\top} \mathbf {s} _ {t} (u) + \frac {1}{2} \left[ \left(\operatorname {d i v} \left(\Sigma_ {\varepsilon} ^ {2} \mathbf {s} _ {t} (u)\right) + \mathbf {s} _ {t} (u) ^ {\top} \Sigma_ {\varepsilon} ^ {2} \mathbf {s} _ {t} (u)\right) \right], +$$ + +so that, + +$$ +\partial_ {t} \log p _ {T - t} (u) = \operatorname {T r} (A) + (A u) ^ {\top} \mathbf {s} _ {T - t} (u) - \frac {1}{2} \left[ \left(\operatorname {d i v} \left(\Sigma_ {\varepsilon} ^ {2} \mathbf {s} _ {T - t} (u)\right) + \mathbf {s} _ {T - t} (u) ^ {\top} \Sigma_ {\varepsilon} ^ {2} \mathbf {s} _ {T - t} (u)\right) \right]. +$$ + +Recall that the backward process can be written as + +$$ +\mathrm {d} \overleftarrow {\mathbf {U}} _ {t} = (- A \overleftarrow {\mathbf {U}} _ {t} + \Sigma_ {\varepsilon} ^ {2} \mathbf {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t})) \mathrm {d} t + \Sigma_ {\varepsilon} \mathrm {d} B _ {t}. 
+$$ + +Hence, by Itô's formula, + +$$ +\begin{array}{l} \mathrm {d} \left(\mathfrak {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t})\right) = \partial_ {t} \mathfrak {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \mathrm {d} t + \nabla \mathfrak {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \mathrm {d} \overleftarrow {\mathbf {U}} _ {t} + \frac {1}{2} \operatorname {T r} \left(\Sigma_ {\varepsilon} ^ {2} \nabla^ {2} \mathfrak {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t})\right) \mathrm {d} t \\ = \nabla \partial_ {t} \log p _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \mathrm {d} t - \nabla \mathsf {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) A \overleftarrow {\mathbf {U}} _ {t} \mathrm {d} t + \nabla \mathsf {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \Sigma_ {\varepsilon} ^ {2} \mathsf {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \mathrm {d} t \\ + \frac {1}{2} \operatorname {T r} \left(\Sigma_ {\varepsilon} ^ {2} \nabla^ {2} \mathbf {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t})\right) d t + \nabla \mathbf {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \Sigma_ {\varepsilon} d B _ {t} \\ = \nabla \left(\partial_ {t} \log p _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) + \frac {1}{2} \mathfrak {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) ^ {\top} \Sigma_ {\varepsilon} ^ {2} \mathfrak {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) + \frac {1}{2} \mathrm {d i v} \left(\Sigma_ {\varepsilon} ^ {2} \mathfrak {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t})\right)\right) d t \\ - \nabla \mathfrak {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) A \overleftarrow {\mathbf {U}} _ {t} d t + \nabla \mathfrak {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \Sigma_ {\varepsilon} d B _ {t} \\ = A ^ {\top} \mathsf {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \mathrm {d} t + \nabla \mathsf {s} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \Sigma_ {\varepsilon} \mathrm {d} B _ {t}, \\ \end{array} +$$ + +which completes the proof, where
we used that for $u \in \mathbb{R}^{2d}$, $2\nabla^2\log p_t(u)\Sigma_\varepsilon^2\mathsf{s}_t(u) = \nabla (\mathsf{s}_t(u)^\top \Sigma_\varepsilon^2\mathsf{s}_t(u))$ and $\nabla \mathrm{div}(\Sigma_{\varepsilon}^{2}\mathsf{s}_{t}(u)) = \nabla \mathrm{Tr}(\Sigma_{\varepsilon}^{2}\nabla^{2}\log p_{t}(u))$. Indeed, for $k \in \{1,\dots,2d\}$, with $g(u) = \log p_t(u)$ and $g_i(u) = \frac{\partial}{\partial u_i} g(u)$ the $i$-th component of the score, and since $\Sigma_\varepsilon^2$ is diagonal, + +$$ +\begin{array}{l} \frac {\partial}{\partial u _ {k}} \left(\nabla g (u) ^ {\top} \Sigma_ {\varepsilon} ^ {2} \nabla g (u)\right) = \frac {\partial}{\partial u _ {k}} \sum_ {i, j} g _ {i} (u) \Sigma_ {\varepsilon , i j} ^ {2} g _ {j} (u) \\ = \sum_ {i, j} \Sigma_ {\varepsilon , i j} ^ {2} \left(g _ {j} (u) \frac {\partial}{\partial u _ {k}} g _ {i} (u) + g _ {i} (u) \frac {\partial}{\partial u _ {k}} g _ {j} (u)\right) \\ = 2 \sum_ {i = 1} ^ {2 d} \Sigma_ {\varepsilon , i i} ^ {2} \left(g _ {i} (u) \frac {\partial}{\partial u _ {k}} g _ {i} (u)\right) \\ = 2 \sum_ {i = 1} ^ {2 d} \Sigma_ {\varepsilon , i i} ^ {2} \left(\frac {\partial}{\partial u _ {i}} g (u) \frac {\partial}{\partial u _ {k}} \frac {\partial}{\partial u _ {i}} g (u)\right) \\ = \left[ 2 \nabla^ {2} g (u) \Sigma_ {\varepsilon} ^ {2} \nabla g (u) \right] _ {k}. \\ \end{array} +$$ + +![](images/9a6552ee50dc23007679e87a8c27faa18af7638f9d4060a3b06b3f64745da510.jpg) + +Lemma D.7. Assume that $(\overleftarrow{\mathbf{U}}_t)_{t\in [0,T]}$ is solution to the backward SDE associated with (9). Then, + +$$ +\mathrm {d} \left(\tilde {\mathbf {s}} _ {T - t} \left(\overleftarrow {\mathbf {U}} _ {t}\right)\right) = - \tilde {A} _ {\epsilon} ^ {\top} \tilde {\mathbf {s}} _ {T - t} \left(\overleftarrow {\mathbf {U}} _ {t}\right) \mathrm {d} t + \nabla^ {2} \log \tilde {p} _ {T - t} \left(\overleftarrow {\mathbf {U}} _ {t}\right) \Sigma_ {\epsilon} \mathrm {d} B _ {t}. \tag {58} +$$ + +Proof.
Recall that $p_{\infty}$ is the stationary distribution of (4), so that the Fokker-Planck equation yields, for $u \in \mathbb{R}^{2d}$, + +$$ +\begin{array}{l} 0 = - \operatorname {T r} (A) - (A u) ^ {\top} \nabla \log p _ {\infty} (u) \\ + \frac {1}{2} \left[ \operatorname {d i v} \left(\Sigma^ {2} \nabla \log p _ {\infty} (u)\right) + \nabla \log p _ {\infty} (u) ^ {\top} \Sigma^ {2} \nabla \log p _ {\infty} (u) \right]. \\ \end{array} +$$ + +Using that $\tilde{p}_t = p_t / p_\infty$ and the Fokker-Planck equation as in Lemma D.6, + +$$ +\begin{array}{l} \partial_ {t} \log \tilde {p} _ {t} (u) = - (A u) ^ {\top} \tilde {\mathfrak {s}} _ {t} (u) \\ + \frac {1}{2} \left[ \operatorname {d i v} \left(\Sigma^ {2} \tilde {\mathbf {s}} _ {t} (u)\right) + \tilde {\mathbf {s}} _ {t} (u) ^ {\top} \Sigma^ {2} \tilde {\mathbf {s}} _ {t} (u) \right] \\ + \tilde {\mathfrak {s}} _ {t} (u) ^ {\top} \Sigma^ {2} \nabla \log p _ {\infty} (u). \\ \end{array} +$$ + +Using the definition of $\tilde{A}_{\epsilon}$, we have + +$$ +\partial_ {t} \log \tilde {p} _ {t} (u) = (\tilde {A} _ {\epsilon} u) ^ {\top} \tilde {\mathfrak {s}} _ {t} (u) + \frac {1}{2} \left[ \operatorname {d i v} \left(\Sigma^ {2} \tilde {\mathfrak {s}} _ {t} (u)\right) + \tilde {\mathfrak {s}} _ {t} (u) ^ {\top} \Sigma^ {2} \tilde {\mathfrak {s}} _ {t} (u) \right], +$$ + +and therefore, + +$$ +\partial_ {t} \log \tilde {p} _ {T - t} (u) = - (\tilde {A} _ {\epsilon} u) ^ {\top} \tilde {\mathbf {s}} _ {T - t} (u) - \frac {1}{2} \left[ \operatorname {d i v} \left(\Sigma^ {2} \tilde {\mathbf {s}} _ {T - t} (u)\right) + \tilde {\mathbf {s}} _ {T - t} (u) ^ {\top} \Sigma^ {2} \tilde {\mathbf {s}} _ {T - t} (u) \right]. +$$ + +Recall that the modified backward process can be written as + +$$ +\mathrm {d} \overleftarrow {\mathbf {U}} _ {t} = (\tilde {A} _ {\epsilon} \overleftarrow {\mathbf {U}} _ {t} + \Sigma^ {2} \tilde {\mathbf {s}} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t})) \mathrm {d} t + \Sigma_ {\epsilon} \mathrm {d} B _ {t}.
+$$ + +Hence, by Itô's formula, + +$$ +\begin{array}{l} \mathrm {d} \left(\tilde {\mathbf {s}} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t})\right) \\ = \partial_ {t} \tilde {\mathbf {s}} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \mathrm {d} t + \nabla^ {2} \log \tilde {p} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \mathrm {d} \overleftarrow {\mathbf {U}} _ {t} + \frac {1}{2} \operatorname {T r} \left(\Sigma^ {2} \nabla^ {2} \tilde {\mathbf {s}} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t})\right) \mathrm {d} t \\ = \nabla \left(\partial_ {t} \log \tilde {p} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) + \frac {1}{2} \tilde {\mathbf {s}} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) ^ {\top} \Sigma^ {2} \tilde {\mathbf {s}} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) + \frac {1}{2} \operatorname {d i v} \left(\Sigma^ {2} \tilde {\mathbf {s}} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t})\right)\right) \mathrm {d} t \\ + \nabla^ {2} \log \tilde {p} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \tilde {A} _ {\epsilon} \overleftarrow {\mathbf {U}} _ {t} \mathrm {d} t + \nabla^ {2} \log \tilde {p} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \Sigma_ {\epsilon} \mathrm {d} B _ {t} \\ = - \tilde {A} _ {\epsilon} ^ {\top} \tilde {\mathbf {s}} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \mathrm {d} t + \nabla^ {2} \log \tilde {p} _ {T - t} (\overleftarrow {\mathbf {U}} _ {t}) \Sigma_ {\epsilon} \mathrm {d} B _ {t}, \\ \end{array} +$$ + +which completes the proof. + +Lemma D.8. Let $\Delta$ be an arbitrary fixed positive constant, and assume that $(\overleftarrow{\mathbf{U}}_t)_{t\in [0,T - \Delta ]}$ is the solution to (11).
Then, there exists a universal constant $C > 0$ such that + +$$ +\mathbb {E} \left[ \left\| \nabla \log \tilde {p} _ {T - t} \left(\overleftarrow {\mathbf {U}} _ {t}\right) - \nabla \log \tilde {p} _ {T - t _ {k}} \left(\overleftarrow {\mathbf {U}} _ {t _ {k}}\right) \right\| ^ {2} \right] \leq C (g (t _ {k + 1}) - g (t _ {k})), +$$ + +for $t\in [t_k,t_{k + 1}]$, with + +$$ +g (t) := \mathbb {E} \left[ \left\| \nabla \log \tilde {p} _ {T - t} \left(\overleftarrow {\mathbf {U}} _ {t}\right) \right\| ^ {2} \right]. \tag {59} +$$ + +Proof. The argument follows from an adaptation of Conforti et al. (Proposition 3.2, 2025) to our setting. Let $Y_{t} \coloneqq \nabla \log \tilde{p}_{T - t}(\overleftarrow{\mathbf{U}}_{t})$ and $Z_{t} \coloneqq \nabla^{2}\log \tilde{p}_{T - t}(\overleftarrow{\mathbf{U}}_{t})$. From (58), the process $(Y_{t})_{t\in [0,T]}$ satisfies + +$$ +\mathrm {d} Y _ {t} = - \tilde {A} _ {\epsilon} ^ {\top} Y _ {t} \mathrm {d} t + Z _ {t} \Sigma_ {\epsilon} \mathrm {d} B _ {t}. +$$ + +Applying Itô's formula to $\| Y_t \|^2$ yields + +$$ +\mathrm {d} \left\| Y _ {t} \right\| ^ {2} = - 2 \left\langle Y _ {t}, \tilde {A} _ {\epsilon} ^ {\top} Y _ {t} \right\rangle \mathrm {d} t + 2 \left\langle Y _ {t}, Z _ {t} \Sigma_ {\epsilon} \mathrm {d} B _ {t} \right\rangle + \left\| Z _ {t} \Sigma_ {\epsilon} \right\| _ {\mathrm {F r}} ^ {2} \mathrm {d} t. +$$ + +Therefore, there exists a constant $c > 0$, depending only on $a$, such that + +$$ +\mathrm {d} \| Y _ {t} \| ^ {2} \geq c \left(\| Y _ {t} \| ^ {2} + \| Z _ {t} \Sigma_ {\epsilon} \| _ {\mathrm {F r}} ^ {2}\right) \mathrm {d} t + \tilde {H} _ {t} \mathrm {d} B _ {t}, +$$ + +where $\tilde{H}_t$ denotes a stochastic process. Moreover, following the argument of Conforti et al. (Lemma 3.3, 2025), the stochastic integral $\int_0^t\tilde{H}_r\mathrm{d}B_r$ is a true martingale.
Using this and integrating over $[t_k,t]$, we deduce that there exists a universal constant $C > 0$ (whose value may change in the course of the argument) such that + +$$ +\mathbb {E} \left[ \| Y _ {t} - Y _ {t _ {k}} \| ^ {2} \right] \leq C \int_ {t _ {k}} ^ {t _ {k + 1}} \mathbb {E} \left[ \| Y _ {s} \| ^ {2} + \| Z _ {s} \Sigma_ {\epsilon} \| _ {\mathrm {F r}} ^ {2} \right] \mathrm {d} s \leq C \big (g (t _ {k + 1}) - g (t _ {k}) \big). +$$ + +![](images/ed5b18d20a8765bced0dfe65c9a94aeff0e4aa3fb75237f73b8c4193bb760151.jpg) + +Lemma D.9. Let $A \in \mathbb{R}^{n \times n}$ be an invertible matrix, and let $B \in \mathbb{R}^{n \times n}$ be such that $A - B$ is also invertible. Then, + +$$ +(A - B) ^ {- 1} - A ^ {- 1} = (A - B) ^ {- 1} B A ^ {- 1}. +$$ + +Proof. Note that + +$$ +(A - B) ^ {- 1} - A ^ {- 1} = (A - B) ^ {- 1} A A ^ {- 1} - A ^ {- 1} = \left[ (A - B) ^ {- 1} A - \mathbf {I} _ {n} \right] A ^ {- 1}, +$$ + +and + +$$ +(A - B) ^ {- 1} A = (A - B) ^ {- 1} \left((A - B) + B\right) = \mathbf {I} _ {n} + (A - B) ^ {- 1} B, +$$ + +so that + +$$ +\left[ (A - B) ^ {- 1} A - \mathbf {I} _ {n} \right] A ^ {- 1} = (A - B) ^ {- 1} B A ^ {- 1}, +$$ + +which completes the proof. + +![](images/cdff1e3edf825294c928f994577e048a97da74c5ea06cd6338ee759738685b57.jpg) + +# E Numerical Illustration + +This section provides additional details on the numerical implementation described in Section 4. + +# E.1 CLD training and sampling + +Algorithms 1 and 2 show the training and sampling procedures for the CLD-based approaches, respectively. + +Algorithm 1 CLD Training +Require: Dataset $\mathcal{D}$, batch size $B$, network $s_{\theta}(\cdot ,t)$, a positive weight function $\lambda :[0,T]\to \mathbb{R}_{+}$, and $\epsilon \geq 0$ +1: Precompute $\tilde{\Sigma}_{0,t} = \Sigma_{0,t} + e^{tA}\mathrm{diag}(0\cdot\mathbf{I}_d,v^2\mathbf{I}_d)(e^{tA})^\top$. $\triangleright$ The value of $\Sigma_{0,t}$ depends on $\epsilon$; see Lemma A.2, Eq. (23).
+2: while not converged do +3: Sample $\{x^{(i)}\}_{i = 1}^{B}\sim \mathcal{D}$ +4: Sample $\{t^{(i)}\}_{i = 1}^{B}\sim \mathcal{U}([0,T])$ +5: Sample $\{\varepsilon^{(i)}\}_{i = 1}^{B}\sim \mathcal{N}(0,\mathbf{I}_{2d})$ +6: $\overrightarrow{\mathbf{U}}_{t^{(i)}} = e^{t^{(i)}A}\left(x^{(i)},0_d\right)^\top +(\tilde{\Sigma}_{0,t^{(i)}})^{1 / 2}\varepsilon^{(i)}$ +7: $\mathcal{L}\gets \frac{1}{B}\sum_{i = 1}^{B}\lambda (t^{(i)})\left\| s_{\theta}\left(t^{(i)},\overrightarrow{\mathbf{U}}_{t^{(i)}}\right) + (\tilde{\Sigma}_{0,t^{(i)}})^{-1 / 2}\varepsilon^{(i)}\right\| ^2$ +8: Update $\theta$ by taking a gradient step on $\nabla_{\theta}\mathcal{L}$ +9: end while + +Algorithm 2 CLD Sampling +Require: Learned network $s_\theta$, number of discretization steps $N$, and $\epsilon \geq 0$. +1: $h \gets T / N$ +2: $\bar{\mathbf{U}}_0^{\theta} \sim \pi_{\infty}$ +3: for $k = 0$ to $N - 1$ do +4: $t_k \gets kh$ +5: Sample $Z_k \sim \mathcal{N}(0, \mathbf{I}_{2d})$ +6: $\bar{\mathbf{U}}_{t_{k + 1}}^{\theta} = \bar{\mathbf{U}}_{t_k}^{\theta} + h\left(\tilde{A}_\epsilon \bar{\mathbf{U}}_{t_k}^{\theta} + \Sigma_\epsilon^2 s_\theta (t_k,\bar{\mathbf{U}}_{t_k}^{\theta})\right) + \sqrt{h}\Sigma_\epsilon Z_k$ +7: end for +8: return first $d$ coordinates of $\bar{\mathbf{U}}_{t_N}^{\theta}$ $\triangleright$ Return position only, discard velocity. + +# E.2 Time-rescaling of the forward SDE + +Following Dockhorn et al. (2022), one often implements in practice a time-rescaled version of + +$$ +\mathrm {d} \overrightarrow {\mathbf {U}} _ {t} = A \overrightarrow {\mathbf {U}} _ {t} \mathrm {d} t + \Sigma_ {\epsilon} \mathrm {d} B _ {t}, +$$ + +by introducing a positive noise schedule $\beta \colon [0,1] \to [0,\infty)$ and setting + +$$ +\tilde {\overrightarrow {\mathbf {U}}} _ {t} = \overrightarrow {\mathbf {U}} _ {\tau (t)} \quad \text {and} \quad \tau (t) = \int_ {0} ^ {t} \beta (s) \mathrm {d} s.
+$$ + +Equivalently, $\tilde{\vec{\mathbf{U}}}_t$ satisfies the inhomogeneous SDE + +$$ +\mathrm {d} \tilde {\vec {\mathbf {U}}} _ {t} = \underbrace {\beta (t) A} _ {= \tilde {A} (t)} \tilde {\vec {\mathbf {U}}} _ {t} \mathrm {d} t + \underbrace {\sqrt {\beta (t)} \Sigma_ {\epsilon}} _ {= \tilde {\Sigma} _ {\epsilon} (t)} \mathrm {d} B _ {t}, +$$ + +In the critically-damped example (Equation (4)), we have + +$$ +\tilde {A} (t) = \left( \begin{array}{c c} 0 & \beta (t) a ^ {2} \\ - \beta (t) & - 2 a \beta (t) \end{array} \right) \otimes \mathbf {I} _ {d}, \quad \tilde {\Sigma} _ {\epsilon} (t) = \sqrt {\beta (t)} \Sigma_ {\epsilon} \otimes \mathbf {I} _ {d}. +$$ + +Mean factor. Since $\vec{\overrightarrow{\mathbf{U}}}_t = \vec{\overrightarrow{\mathbf{U}}}_{\tau (t)}$ , we can deduce from the homogeneous solution the mean factor, + +$$ +\mathbb {E} \left[ \overrightarrow {\mathbf {U}} _ {t} \mid \overrightarrow {\mathbf {U}} _ {0} \right] = e ^ {- a \tau (t)} \left(\left( \begin{array}{c c} 1 + a \tau (t) & a ^ {2} \tau (t) \\ - \tau (t) & 1 - a \tau (t) \end{array} \right) \otimes \mathbf {I} _ {d}\right) \overrightarrow {\mathbf {U}} _ {0}. +$$ + +Covariance. Again by the time-change $\tau(t)$ , one has + +$$ +\operatorname {C o v} \left(\vec {\widetilde {\mathbf {U}}} _ {t} \mid \vec {\mathbf {U}} _ {0}\right) = \operatorname {C o v} \left(\vec {\mathbf {U}} _ {\tau (t)} \mid \vec {\mathbf {U}} _ {0}\right) = \int_ {0} ^ {\tau (t)} e ^ {s A} \Sigma_ {\epsilon} \Sigma_ {\epsilon} ^ {T} e ^ {s A ^ {T}} d s. +$$ + +Affine schedule. A popular and simple choice of noise schedule is an affine noise schedule given by + +$$ +\beta (t) = \beta_ {1} t + \beta_ {0}, \quad \tau (t) = \frac {\beta_ {1}}{2} t ^ {2} + \beta_ {0} t. +$$ + +# E.3 Score approximation + +Denoising Score Matching (DSM). 
Recall that the conditional score function of the forward process (4) given the initial data distribution is Gaussian, + +$$ +\nabla \log p _ {t} (\overrightarrow {\mathbf {U}} _ {t} | \overrightarrow {\mathbf {U}} _ {0}) = - \Sigma_ {0, t} ^ {- 1} \left(\overrightarrow {\mathbf {U}} _ {t} - e ^ {t A} \overrightarrow {\mathbf {U}} _ {0}\right). +$$ + +Hence, following Vincent (2011), the denoising score matching loss $\mathcal{L}_{\mathrm{DSM}}$, for $\theta \in \Theta$, $s_{\theta}(t,x)\colon [0,T]\times \mathbb{R}^{2d}\to \mathbb{R}^{2d}$, and $Z_{2d}\sim \mathcal{N}(0,\mathbf{I}_{2d})$, can be written as + +$$ +\begin{array}{l} \mathcal {L} _ {\mathrm {D S M}} (\theta) = \mathbb {E} \left[ \lambda (\tau) \left\| s _ {\theta} \left(\tau , \overrightarrow {\mathbf {U}} _ {\tau}\right) - \nabla \log p _ {\tau} \left(\overrightarrow {\mathbf {U}} _ {\tau} | \overrightarrow {\mathbf {U}} _ {0}\right) \right\| ^ {2} \right] \\ = \mathbb {E} \left[ \lambda (\tau) \left\| s _ {\theta} \left(\tau , \mathrm {e} ^ {\tau A} \overrightarrow {\mathbf {U}} _ {0} + \sqrt {\Sigma_ {0 , \tau}} Z _ {2 d}\right) + \Sigma_ {0, \tau} ^ {- 1 / 2} Z _ {2 d} \right\| ^ {2} \right], \end{array} +$$ + +where $\tau \sim \mathcal{U}[0,T]$, $\tau \perp Z_{2d}$, and $\lambda :[0,T]\to \mathbb{R}_{>0}$ is a positive weight function. + +Hybrid Score Matching (HSM). It has been shown in Dockhorn et al. (2022) that another loss, potentially more stable numerically, can be obtained by conditioning only on $\vec{X}_0$ rather than on the full state $\vec{\mathbf{U}}_0 = (\vec{X}_0,\vec{V}_0)^\top$.
This hybrid score matching loss can be derived by marginalizing out the velocity component $\vec{V}_0\sim \mathcal{N}\left(0_d,v^2\mathbf{I}_d\right)$, $\vec{V}_0\perp \vec{X}_0$, in the conditional score function, + +$$ +\begin{array}{l} \mathcal {L} _ {\mathrm {H S M}} (\theta) = \mathbb {E} \left[ \lambda (\tau) \left\| s _ {\theta} (\tau , \overrightarrow {\mathbf {U}} _ {\tau}) - \nabla \log p _ {\tau} (\overrightarrow {\mathbf {U}} _ {\tau} \mid \overrightarrow {X} _ {0}) \right\| ^ {2} \right] \\ = \mathbb {E} \left[ \lambda (\tau) \left\| s _ {\theta} \left(\tau , e ^ {\tau A} \left(\overrightarrow {X} _ {0} , 0 _ {d}\right) ^ {\top} + \sqrt {\Sigma_ {0 , \tau} ^ {\prime}} Z _ {2 d}\right) + (\Sigma_ {0, \tau} ^ {\prime}) ^ {- 1 / 2} Z _ {2 d} \right\| ^ {2} \right], \end{array} +$$ + +with $Z_{2d}\sim \mathcal{N}(0,\mathbf{I}_{2d})$ independent of $\tau \sim \mathcal{U}[0,T]$, and + +$$ +\Sigma_ {0, \tau} ^ {\prime} = \Sigma_ {0, \tau} + e ^ {\tau A} \left( \begin{array}{c c} 0 & 0 \\ 0 & v ^ {2} \mathbf {I} _ {d} \end{array} \right) (e ^ {\tau A}) ^ {\top}. +$$ + +# E.4 Neural network architectures + +In Figure 3 we detail the neural network used in the illustration. The input layer consists of a vector $x$ of dimension $2d$ and the time $t$, embedded using a linear transformation and a sine/cosine transformation (Nichol and Dhariwal, 2021), respectively, both of width mid_features. These embeddings are followed by three dense layers of constant width mid_features, each with a SiLU activation and a skip connection from the time embedding. The output layer is linear, producing a vector of dimension $d$ (when $\varepsilon = 0$) or $2d$ (when $\varepsilon \neq 0$). + +![](images/d7ba53b7a8df6e404d92ce9e4e05fd61993533e1883d1e900f691dc7b8c9c23b.jpg) +Figure 3: Neural network architecture. + +# E.5 Additional experiments + +We present additional experimental results for the MG25 distribution in dimension 100 and the 2D-diamond dataset.
The MG25 distribution is a Gaussian mixture model with 25 modes in dimension 100, defined as + +$$ +\pi_ {\mathrm {data}} (x) = \frac {1}{25} \sum_ {(j, k) \in \{- 2, \dots , 2 \} ^ {2}} \varphi_ {\mu_ {j k}, \Sigma_ {d}} (x), +$$ + +with $\varphi_{\mu_{jk},\Sigma_d}$ denoting the probability density function of the Gaussian distribution with covariance matrix $\Sigma_d = \mathrm{diag}(0.01,0.01,0.1,\dots,0.1)$ and mean vector $\mu_{jk} = [j,k,0,\dots ,0]^\top$. This dataset has been previously used in Thin et al. (2021); Strasman et al. (2025). The 2D-diamond distribution is a two-dimensional dataset with well-separated modes, used as a synthetic dataset in Dockhorn et al. (2022). + +Tables 1, 2, and 3 report the sliced-Wasserstein error for different values of the regularization parameter $\varepsilon \in \{0,0.1,0.25,0.5,1\}$ and drift coefficient $a \in \{0.1,0.25,0.5,1,2\}$, using the same experimental setup as for the Funnel dataset described in Section 4. Tables 1 and 2 highlight the improvement in generation quality achieved with smaller regularization values of $\varepsilon$. Table 3 reports the values displayed in Figure 1 with the associated standard deviations. + +Table 1: Comparison of mean Wasserstein distance for different noise levels $\varepsilon$ on the MG25-100D (mean $\pm$ standard deviation across 5 runs; lower is better).
| $\varepsilon$ | $a = 0.1$ | $a = 0.25$ | $a = 0.5$ | $a = 1.0$ | $a = 2.0$ |
| --- | --- | --- | --- | --- | --- |
| 0 | 0.284 ± 0.002 | 0.199 ± 0.001 | 0.034 ± 0.002 | 0.009 ± 0.001 | 0.009 ± 0.001 |
| 0.1 | 0.192 ± 0.001 | 0.159 ± 0.001 | 0.026 ± 0.001 | 0.005 ± 0.001 | 0.008 ± 0.001 |
| 0.25 | 0.013 ± 0.001 | 0.065 ± 0.001 | 0.015 ± 0.001 | 0.007 ± 0.001 | 0.007 ± 0.001 |
| 0.5 | 0.191 ± 0.007 | 0.004 ± 0.001 | 0.009 ± 0.001 | 0.008 ± 0.001 | 0.008 ± 0.001 |
| 1 | 0.389 ± 0.030 | 0.045 ± 0.003 | 0.011 ± 0.002 | 0.006 ± 0.001 | 0.008 ± 0.001 |
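As a point of reference for Table 1, the MG25 target defined above is straightforward to sample directly. The NumPy sketch below is illustrative only (the function name, seed handling, and the default $d = 100$ are our choices, not the paper's); it draws from the uniform mixture of 25 Gaussians with means $\mu_{jk} = [j,k,0,\dots,0]^\top$, $(j,k) \in \{-2,\dots,2\}^2$, and shared covariance $\Sigma_d = \mathrm{diag}(0.01, 0.01, 0.1, \dots, 0.1)$.

```python
import numpy as np

def sample_mg25(n, d=100, rng=None):
    """Draw n samples from the 25-mode Gaussian mixture target (MG25)."""
    rng = np.random.default_rng(rng)
    # Mode centers (j, k) on the grid {-2, ..., 2}^2, placed on the first
    # two coordinates; the remaining d - 2 coordinates are centered at 0.
    means = np.array([[j, k] for j in range(-2, 3) for k in range(-2, 3)], dtype=float)
    std = np.full(d, np.sqrt(0.1))  # diagonal covariance diag(0.01, 0.01, 0.1, ..., 0.1)
    std[:2] = np.sqrt(0.01)
    modes = rng.integers(0, 25, size=n)    # uniform mixture weights 1/25
    x = rng.standard_normal((n, d)) * std  # centered Gaussian with covariance Sigma_d
    x[:, :2] += means[modes]               # shift by the selected mode's mean
    return x

samples = sample_mg25(10_000, rng=0)
```

The first two coordinates then exhibit the 5 × 5 grid of well-separated modes, while the remaining coordinates are plain Gaussian noise.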
+ +Table 2: Comparison of mean Wasserstein distance for different noise levels $\varepsilon$ on the Diamond-2D (mean $\pm$ standard deviation across 5 runs; lower is better). + +
| $\varepsilon$ | $a = 0.1$ | $a = 0.25$ | $a = 0.5$ | $a = 1.0$ | $a = 2.0$ |
| --- | --- | --- | --- | --- | --- |
| 0 | 0.322 ± 0.001 | 0.256 ± 0.004 | 0.039 ± 0.002 | 0.007 ± 0.001 | 0.007 ± 0.002 |
| 0.1 | 0.234 ± 0.001 | 0.198 ± 0.003 | 0.026 ± 0.004 | 0.004 ± 0.001 | 0.005 ± 0.001 |
| 0.25 | 0.048 ± 0.001 | 0.074 ± 0.003 | 0.021 ± 0.002 | 0.004 ± 0.001 | 0.005 ± 0.001 |
| 0.5 | 0.073 ± 0.002 | 0.008 ± 0.001 | 0.008 ± 0.002 | 0.006 ± 0.002 | 0.006 ± 0.002 |
| 1 | 0.095 ± 0.002 | 0.029 ± 0.002 | 0.014 ± 0.001 | 0.013 ± 0.001 | 0.011 ± 0.001 |
+ +Table 3: Comparison of mean Wasserstein distance for different noise levels $\varepsilon$ on the Funnel-100D (mean $\pm$ standard deviation across 5 runs; lower is better). + +
| $\varepsilon$ | $a = 0.1$ | $a = 0.25$ | $a = 0.5$ | $a = 1.0$ | $a = 2.0$ |
| --- | --- | --- | --- | --- | --- |
| 0 | 0.991 ± 0.001 | 0.73 ± 0.002 | 0.291 ± 0.005 | 0.225 ± 0.056 | 0.223 ± 0.011 |
| 0.1 | 0.705 ± 0.001 | 0.632 ± 0.002 | 0.278 ± 0.001 | 0.158 ± 0.027 | 0.198 ± 0.004 |
| 0.25 | 0.277 ± 0.002 | 0.409 ± 0.003 | 0.248 ± 0.012 | 0.137 ± 0.005 | 0.179 ± 0.006 |
| 0.5 | 1.171 ± 0.015 | 0.248 ± 0.002 | 0.228 ± 0.005 | 0.157 ± 0.002 | 0.203 ± 0.003 |
| 1 | 2.885 ± 0.016 | 0.785 ± 0.011 | 0.191 ± 0.008 | 0.253 ± 0.006 | 0.233 ± 0.002 |
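The sliced-Wasserstein error reported in the tables above compares two samples through their one-dimensional projections. Below is a minimal NumPy sketch of a standard Monte Carlo estimator (order $p = 2$, equal sample sizes); the exact estimator, order, and number of projections used in the paper's experiments are not restated here, so treat the function and its defaults as illustrative.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=200, rng=None):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    two equal-size samples x, y of shape (n, d)."""
    rng = np.random.default_rng(rng)
    theta = rng.standard_normal((n_proj, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # random unit directions
    # Project onto each direction and sort: for equal-size one-dimensional
    # empirical measures, W_2^2 is the mean squared gap between order statistics.
    px = np.sort(x @ theta.T, axis=0)  # shape (n, n_proj)
    py = np.sort(y @ theta.T, axis=0)
    return float(np.sqrt(np.mean((px - py) ** 2)))

rng = np.random.default_rng(0)
a = rng.standard_normal((2000, 100))
b = rng.standard_normal((2000, 100)) + 0.5  # same law, shifted by 0.5 in every coordinate
```

For the shifted pair above, a unit direction $\theta$ displaces the projected sample by $\theta^\top c$ with $\mathbb{E}[(\theta^\top c)^2] = \|c\|^2/d$, so the estimate should land roughly near $\|c\|/\sqrt{d} = 0.5$, up to Monte Carlo and sampling error.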
+ +# NeurIPS Paper Checklist + +# 1. Claims + +Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? + +Answer: [Yes] + +Justification: The abstract summarizes the key contributions of the paper, clarifying their position relative to existing work and highlighting improvements over the state of the art. + +Guidelines: + +- The answer NA means that the abstract and introduction do not include the claims made in the paper. +- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. +- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. +- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. + +# 2. Limitations + +Question: Does the paper discuss the limitations of the work performed by the authors? + +Answer: [Yes] + +Justification: The authors are transparent about the nature of the assumptions and mathematical models they study, and consistently highlight these choices as part of the scope and limitations of their work. + +Guidelines: + +- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. +- The authors are encouraged to create a separate "Limitations" section in their paper. +- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. 
+- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. +- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. +- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. +- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. +- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. + +# 3. Theory assumptions and proofs + +Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? + +# Answer: [Yes] + +Justification: Each theoretical result contains references to assumptions clearly defined in the main document. The rigorous proofs are also provided in the supplementary material. + +# Guidelines: + +- The answer NA means that the paper does not include theoretical results. +- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced. 
- All assumptions should be clearly stated or referenced in the statement of any theorems. +- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. +- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. +- Theorems and Lemmas that the proof relies upon should be properly referenced. + +# 4. Experimental result reproducibility + +Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? + +Answer: [Yes] + +Justification: All the numerical experiments are well documented either in the main body text or in the appendices. The contributions are primarily theoretical and methodological, and are supported by numerical experiments. + +Guidelines: + +- The answer NA means that the paper does not include experiments. +- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. +- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. +- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model.
In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. +- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example +(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. +(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. +(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). +(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. + +# 5. Open access to data and code + +Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? + +Answer: [Yes] + +Justification: The paper will be accompanied by a public GitHub repository with scripts and notebooks to reproduce the numerical experiments. For the submission, the code is provided as a supplementary zip file.
+ +Guidelines: + +- The answer NA means that paper does not include experiments requiring code. +- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. +- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). +- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. +- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. +- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. +- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). +- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. + +# 6. Experimental setting/details + +Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? + +Answer: [Yes] + +Justification: We refer to the appendices for exhaustive implementation details. + +Guidelines: + +- The answer NA means that the paper does not include experiments. +- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. 
+- The full details can be provided either with the code, in appendix, or as supplemental material. + +# 7. Experiment statistical significance + +Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? + +Answer: [Yes] + +Justification: All the numerical experiments are run several times, so that the performance metrics are given as averages over the different runs, with associated standard deviations. This information is available for instance graphically with confidence regions. + +Guidelines: + +- The answer NA means that the paper does not include experiments. +- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. + +- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). +- The method for calculating the error bars should be explained (closed form formula call to a library function, bootstrap, etc.) +- The assumptions made should be given (e.g., Normally distributed errors). +- It should be clear whether the error bar is the standard deviation or the standard error of the mean. +- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a $96\%$ CI, if the hypothesis of Normality of errors is not verified. +- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). 
- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. + +# 8. Experiments compute resources + +Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? + +Answer: [Yes] + +Justification: Although the paper is theoretical, it provides information about the computational resources used in all experiments, including the types of compute workers (CPU or GPU). + +Guidelines: + +- The answer NA means that the paper does not include experiments. +- The paper should indicate the type of compute workers CPU or GPU, internal cluster or cloud provider, including relevant memory and storage. +- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. +- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). + +# 9. Code of ethics + +Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines? + +Answer: [Yes] + +Justification: This research does not involve any human subjects or participants and contains no (sensitive) data. + +Guidelines: + +- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. +- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. +- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). + +# 10.
Broader impacts + +Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? + +Answer: [No] + +Justification: This paper is purely methodological/theoretical. Results are established for pre-existing algorithms and their variants. Thus, there are no specific positive or negative societal impacts of this research. + +Guidelines: + +- The answer NA means that there is no societal impact of the work performed. +- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. +- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. +- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. +- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
+- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). + +# 11. Safeguards + +Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? + +Answer: [NA] + +Justification: The paper is theoretical and does not involve the release of data or models that have a high risk for misuse. Therefore, no safeguards were necessary. + +# Guidelines: + +- The answer NA means that the paper poses no such risks. +- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. +- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. +- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. + +# 12. Licenses for existing assets + +Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? + +Answer: [NA] + +Justification: No existing assets were used, so no credits or licenses apply. + +# Guidelines: + +- The answer NA means that the paper does not use existing assets. +- The authors should cite the original paper that produced the code package or dataset. 
+- The authors should state which version of the asset is used and, if possible, include a URL. +- The name of the license (e.g., CC-BY 4.0) should be included for each asset. + +- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. +- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. +- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. +- If this information is not available online, the authors are encouraged to reach out to the asset's creators. + +# 13. New assets + +Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? + +Answer: [Yes] + +Justification: The code is provided and well-documented. + +Guidelines: + +- The answer NA means that the paper does not release new assets. +- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. +- The paper should discuss whether and how consent was obtained from people whose asset is used. +- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. + +# 14. Crowdsourcing and research with human subjects + +Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? + +Answer: [NA] + +Justification: Not applicable. 
+ +Guidelines: + +- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. +- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. +- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. + +# 15. Institutional review board (IRB) approvals or equivalent for research with human subjects + +Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? + +Answer: [NA] + +Justification: Not applicable + +Guidelines: + +- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. + +- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. +- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review. + +# 16. Declaration of LLM usage + +Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required. + +Answer: [NA] + +Justification: We did not employ any LLMs during any stage of the research process. 
+ +Guidelines: + +- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components. +- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described. \ No newline at end of file diff --git a/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/images.zip b/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..bd80406ce3b1feb72fa23f423b2de591ec66850f --- /dev/null +++ b/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6fe05956f4b0d8f7ede8c782ffa7150c58683af61dbb79daeeec5c67a7706771 +size 3151630 diff --git a/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/layout.json b/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d04b5ccfb14be54a266ca65448e054241064732b --- /dev/null +++ b/NeurIPS/2025/Wasserstein Convergence of Critically Damped Langevin Diffusions/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8f64eb1947955ddd9338dd35ce3de298774aec3b9485f2655ee6106484709b7c +size 2165647 diff --git a/NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_content_list.json b/NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a09dc45f4e690d4161f38cf7bdc5afb274141958 --- /dev/null +++ b/NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:2e700717506f78ef9bbad6207cb027dc57492edc09f61ede103498bac692f23c +size 241166 diff --git a/NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_model.json b/NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_model.json new file mode 100644 index 0000000000000000000000000000000000000000..76d4537b0c86d5bfa5029c86833199bc550170aa --- /dev/null +++ b/NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05c0afb8149419b0d22a00a4deb8e4a44108548f0b162fe18595585a5238ef33 +size 295523 diff --git a/NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_origin.pdf b/NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f685c922529c6695d814fb3b6a4325f1afd754ca --- /dev/null +++ b/NeurIPS/2025/Wasserstein Transfer Learning/63f6d0c7-3853-4ed7-9971-594977c35050_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dfa95eba7bc5d107127d6c3d372a8c986e63f2a5723b64f196c5959d5dc490e2 +size 621679 diff --git a/NeurIPS/2025/Wasserstein Transfer Learning/full.md b/NeurIPS/2025/Wasserstein Transfer Learning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b7b78fc160383b53f4ae032e42bb66f1d90c0292 --- /dev/null +++ b/NeurIPS/2025/Wasserstein Transfer Learning/full.md @@ -0,0 +1,1419 @@ +# Wasserstein Transfer Learning + +Kaicheng Zhang $^{1*}$ Sinian Zhang $^{2*}$ Doudou Zhou $^{3\dagger}$ Yidong Zhou $^{4\dagger}$ + +$^{1}$ School of Mathematical Sciences, Zhejiang University, China + +2Division of Biostatistics and Health Data Science, University of Minnesota, USA + +$^{3}$ Department of Statistics and Data Science, National University of Singapore, Singapore + +$^{4}$ Department of Statistics, University of California, Davis, USA + 
+3210102033@zju.edu.cn, zhan9381@umn.edu, + +ddzhou@nus.edu.sg, ydzhou@ucdavis.edu + +# Abstract + +Transfer learning is a powerful paradigm for leveraging knowledge from source domains to enhance learning in a target domain. However, traditional transfer learning approaches often focus on scalar or multivariate data within Euclidean spaces, limiting their applicability to complex data structures such as probability distributions. To address this limitation, we introduce a novel transfer learning framework for regression models whose outputs are probability distributions residing in the Wasserstein space. When the informative subset of transferable source domains is known, we propose an estimator with provable asymptotic convergence rates, quantifying the impact of domain similarity on transfer efficiency. For cases where the informative subset is unknown, we develop a data-driven transfer learning procedure designed to mitigate negative transfer. The proposed methods are supported by rigorous theoretical analysis and are validated through extensive simulations and real-world applications. The code is available at https://github.com/h7nian/WaTL. + +# 1 Introduction + +In recent years, transfer learning [32] has emerged as a powerful paradigm in machine learning, enabling models to leverage knowledge acquired from one domain and apply it to related tasks in another. This approach has proven especially valuable in scenarios where data collection and labeling can be costly, or where tasks exhibit inherent similarities in structure or representation. While early successes focused on conventional data types such as images [30], text [28], and tabular data [38], there is growing interest in extending these methods to more complex data structures. Such data often reside in non-Euclidean spaces and lack basic algebraic operations like addition, subtraction, or scalar multiplication, posing challenges for traditional learning algorithms. 
A key example is probability distributions [27], where for example the sum of two density functions does not yield a valid density. + +Samples of univariate probability distributions are increasingly encountered across various research domains, such as mortality analysis [14], temperature studies [44], and physical activity monitoring [23], among others [27]. Recently, there has been a growing focus on directly modeling distributions as elements of the Wasserstein space, a geodesic metric space related to optimal transport [36, 24]. The absence of a linear structure in this space motivates the development of specialized transfer learning techniques that respect its intrinsic geometry. + +To address this gap, we introduce Wasserstein Transfer Learning (WaTL), a novel transfer learning framework for regression models where outputs are univariate probability distributions. WaTL + +effectively leverages knowledge from source domains to improve learning in a target domain by intrinsically incorporating the Wasserstein metric, which provides a natural way to measure discrepancies between probability distributions. + +# 1.1 Contributions + +The primary contributions of this work are summarized as follows: + +Methodology. We propose a novel transfer learning framework for regression models with distributional outputs, addressing the challenges inherent in the Wasserstein space, which lacks a conventional linear structure. Our framework includes an efficient algorithm for cases where the informative subset of source domains is known, and a data-driven algorithm for scenarios where the subset is unknown. To the best of our knowledge, this is the first comprehensive transfer learning approach specifically designed for regression models with outputs residing in the Wasserstein space. + +Theoretical analysis. 
We establish the asymptotic convergence rates for the WaTL algorithm in both the case where the informative set is known and the more challenging scenario where it must be estimated. In the latter case, we also prove that the informative set can be consistently identified. In both settings, we demonstrate that WaTL effectively improves model performance on the target data by leveraging information from the source domain. The proofs rely heavily on empirical process theory and a careful analysis of the covariate structure. Our key theoretical results extend beyond responses lying in the Wasserstein space, offering potential applications to other complex outputs. + +Simulation studies and real-world applications. We evaluate WaTL through simulations and real data applications, demonstrating its effectiveness in improving target model performance by leveraging source domain information. The benefits become more pronounced with larger source sample sizes, underscoring its ability to harness transferable knowledge. + +# 1.2 Related Work + +Transfer learning. Transfer learning aims to improve performance in a target population by leveraging information from a related source population and has seen wide application across domains [e.g., 16, 15, 8, 41, 40]. Recent theoretical developments have focused on regression in Euclidean settings, including high-dimensional linear [21] and generalized linear models [31], nonparametric regression [6, 22], and function mean estimation from discretely sampled data [5]. In parallel, optimal transport has been used to measure distributional shifts for domain adaptation [10, 29]. However, to the best of our knowledge, no existing work has investigated transfer learning in regression models where outputs are probability distributions residing in the Wasserstein space. This represents a significant gap in the literature, highlighting the need for novel methodologies that address this challenging yet important setting. 
Distributional data analysis. The increasing prevalence of data where distributions serve as fundamental units of observation has spurred the development of distributional data analysis [27]. Recent advancements in this field include geodesic principal component analysis in the Wasserstein space [1], autoregressive models for time series of distributions [39, 44], and distribution-on-distribution regression [14, 7]. Leveraging the Wasserstein metric, regression models with distributional outputs and Euclidean inputs can be viewed as a special case of Fréchet regression [26], which extends linear and local linear regression to outputs residing in general metric spaces. In practical scenarios where only finite samples from the unknown distributional output are available, empirical measures have been utilized as substitutes for the unobservable distributions in regression models [43].

# 2 Preliminaries

# 2.1 Notations

Let $L^2(0,1)$ be the space of square-integrable functions over the interval $(0,1)$, with the associated $L^2$ norm and metric denoted by $\| \cdot \|_2$ and $d_{L^2}$, respectively. To be specific, $\| g \|_2 = (\int_0^1 g^2(z) dz)^{1/2}$ and $d_{L^2}(g_1, g_2) = \| g_1 - g_2 \|_2$. For a vector $Z$, $\| Z \|$ denotes the Euclidean norm. Given a matrix $\Sigma$, we define its spectrum as the set of its singular values. For a sub-Gaussian random vector $X$, we define the sub-Gaussian norm as $\| X \|_{\Psi_2} \coloneqq \sup_{\| v \| = 1} \inf \left\{ t > 0 : E(e^{\langle X, v \rangle^2 / t^2}) \leq 2 \right\}$.

We write $a_{n} \lesssim b_{n}$ if there exists a positive constant $C$ such that $a_{n} \leq C b_{n}$ when $n$ is large enough, and $a_{n} \asymp b_{n}$ if $a_{n} \lesssim b_{n}$ and $b_{n} \lesssim a_{n}$. The notation $a_{n} = O_{p}(b_{n})$ implies that $P(|a_{n} / b_{n}| \leq C) \to 1$ for some constant $C > 0$, while $a_{n} = o_{p}(b_{n})$ implies that $P(|a_{n} / b_{n}| > c) \to 0$ for any constant $c > 0$.
Superscripts typically indicate different data sources, while subscripts distinguish individual samples from the same source. + +# 2.2 Wasserstein Space + +Let $\mathcal{W}$ denote the space of probability distributions on $\mathbb{R}$ with finite second moments, equipped with the 2-Wasserstein, or simply Wasserstein, metric. For two distributions $\mu_1, \mu_2 \in \mathcal{W}$ , the Wasserstein metric is given by $d_{\mathcal{W}}^2(\mu_1, \mu_2) = \inf_{\pi \in \Pi(\mu_1, \mu_2)} \int_{\mathbb{R} \times \mathbb{R}} |s - t|^2 d\pi(s, t)$ , where $\Pi(\mu_1, \mu_2)$ denotes the set of all joint distributions with marginals $\mu_1$ and $\mu_2$ [18]. For a probability measure $\mu \in \mathcal{W}$ with cumulative distribution function $F_\mu$ , we define the quantile function $F_\mu^{-1}$ as the left-continuous inverse of $F_\mu$ , such that $F_\mu^{-1}(u) = \inf \{t \in \mathbb{R} | F_\mu(t) \geq u\}$ , for $u \in (0, 1)$ . It has been established [36] that the Wasserstein metric can be expressed as the $L^2$ metric between quantile functions: + +$$ +d _ {\mathcal {W}} ^ {2} \left(\mu_ {1}, \mu_ {2}\right) = \int_ {0} ^ {1} \left\{F _ {\mu_ {1}} ^ {- 1} (u) - F _ {\mu_ {2}} ^ {- 1} (u) \right\} ^ {2} d u. \tag {1} +$$ + +The space $\mathcal{W}$ , endowed with the Wasserstein metric, forms a complete and separable metric space, commonly known as the Wasserstein space [36]. + +Assuming $E\{d_{\mathcal{W}}^2 (\nu ,\mu)\} < \infty$ for all $\mu \in \mathcal{W}$ , the Fréchet mean [13] of a random distribution $\nu \in \mathcal{W}$ is given by $\nu_{\oplus} = \arg \min_{\mu \in \mathcal{W}}E\{d_{\mathcal{W}}^2 (\nu ,\mu)\}$ . Since the Wasserstein space $\mathcal{W}$ is a Hadamard space [20], the Fréchet mean is well-defined and unique. Moreover, from (1), it follows that the quantile function of the Fréchet mean, denoted as $F_{\nu_{\oplus}}^{-1}$ , satisfies $F_{\nu_{\oplus}}^{-1}(u) = E\{F_{\nu}^{-1}(u)\}$ , $u\in (0,1)$ . 
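Representation (1) makes computation in $\mathcal{W}$ concrete: the 2-Wasserstein distance between two one-dimensional samples is the $L^2$ distance between their empirical quantile functions, and the Fréchet mean is obtained by averaging quantile functions pointwise. A minimal sketch (function names and grid size are our own choices, not taken from the paper's released code):

```python
import numpy as np

def w2_distance(sample1, sample2, m=200):
    """2-Wasserstein distance between two 1-D samples via equation (1):
    the L^2 distance between empirical quantile functions on a uniform grid."""
    u = (np.arange(m) + 0.5) / m              # grid in the interior of (0, 1)
    q1, q2 = np.quantile(sample1, u), np.quantile(sample2, u)
    return np.sqrt(np.mean((q1 - q2) ** 2))   # approximates the integral over (0, 1)

def frechet_mean(samples, m=200):
    """Empirical Fréchet mean of a collection of 1-D samples: its quantile
    function is the pointwise average of the individual quantile functions."""
    u = (np.arange(m) + 0.5) / m
    return np.mean([np.quantile(s, u) for s in samples], axis=0)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=5000)
b = rng.normal(2.0, 1.0, size=5000)
d = w2_distance(a, b)   # for N(0,1) vs N(2,1) the true W2 distance is 2
```

For two Gaussians with equal variance the Wasserstein distance reduces to the mean shift, which gives a quick sanity check on the grid approximation.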
+ +# 2.3 Fréchet Regression + +Consider a random pair $(X,\nu)$ with joint distribution $\mathcal{F}$ on the product space $\mathbb{R}^p\times \mathcal{W}$ . Let $X$ have mean $\theta = E(X)$ and covariance matrix $\Sigma = \operatorname {Var}(X)$ , where $\Sigma$ is assumed to be positive definite. To establish a regression framework for predicting the distributional response $\nu$ from the covariate $X$ , we employ the Fréchet regression model, which extends multiple linear regression and local linear regression to scenarios where responses reside in a metric space [26]. The Fréchet regression function is defined as the conditional Fréchet mean of $\nu$ given $X = x$ , + +$$ +m(x) = \operatorname *{arg min}_{\mu \in \mathcal{W}}E\{d^{2}_{\mathcal{W}}(\nu ,\mu)|X = x\} . +$$ + +For a detailed exposition of Fréchet regression, we refer the reader to [26]. Given $n$ independent realizations $\{(X_i,\nu_i)\}_{i = 1}^n$ , we define the empirical mean and covariance of $X$ as $\overline{X} = \frac{1}{n}\sum_{i = 1}^{n}X_{i}$ and $\widehat{\Sigma} = \frac{1}{n}\sum_{i = 1}^{n}(X_{i} - \overline{X})(X_{i} - \overline{X})^{\mathrm{T}}$ . + +The global Fréchet regression extends classical multiple linear regression and estimates the conditional Fréchet mean as + +$$ +m_{G}(x) = \operatorname *{arg min}_{\mu \in \mathcal{W}}E\{s_{G}(x)d_{\mathcal{W}}^{2}(\nu ,\mu)\} , +$$ + +where the weight function is given by $s_G(x) = 1 + (X - \theta)^{\mathrm{T}}\Sigma^{-1}(x - \theta)$ . The empirical estimator is formulated as + +$$ +\widehat{m}_{G}(x) = \operatorname *{arg min}_{\mu \in \mathcal{W}}\frac{1}{n}\sum_{i = 1}^{n}s_{iG}(x)d_{\mathcal{W}}^{2}(\nu_{i},\mu), +$$ + +where $s_{iG}(x) = 1 + (X_i - \overline{X})^{\mathrm{T}}\hat{\Sigma}^{-1}(x - \overline{X})$ + +Similarly, local Fréchet regression extends classical local linear regression to settings with metric space-valued outputs. 
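For distributional responses, the global estimator $\widehat{m}_G(x)$ above has an explicit form: since the weights satisfy $\sum_i s_{iG}(x) = n$, the quantile function of the minimizer is the weighted average $n^{-1}\sum_i s_{iG}(x) F_{\nu_i}^{-1}$, followed by the $L^2$ projection onto nondecreasing functions (pool-adjacent-violators) to recover a valid quantile function. A sketch under these conventions (the function names are ours):

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: L2 projection onto nondecreasing sequences."""
    blocks = []                                    # [value, weight] pairs
    for v in np.asarray(y, dtype=float):
        blocks.append([v, 1.0])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    return np.concatenate([np.full(int(w), v) for v, w in blocks])

def global_frechet_predict(X, Q, x):
    """Global Fréchet regression with distributional responses.
    X: (n, p) covariates; Q: (n, m) response quantile functions on a shared
    grid; x: (p,) query point.  Returns the predicted quantile function."""
    n, p = X.shape
    Xbar = X.mean(axis=0)
    Sigma = (X - Xbar).T @ (X - Xbar) / n                       # \hat\Sigma
    s = 1.0 + (X - Xbar) @ np.linalg.solve(Sigma, x - Xbar)     # s_{iG}(x)
    qhat = (s[:, None] * Q).mean(axis=0)          # weighted quantile average
    return pava(qhat)                             # project back to a quantile function
```

In a noiseless linear design, the fit interpolates exactly: with $F_{\nu_i}^{-1}(u) = X_i u$, the prediction at $x$ is $x u$.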
In the case of a scalar predictor $X \in \mathbb{R}$ , the local Fréchet regression function is + +$$ +m_{L,h}(x) = \operatorname *{arg min}_{\mu \in \mathcal{W}}E\{s_{L}(x,h)d_{\mathcal{W}}^{2}(\nu ,\mu)\} , +$$ + +where the weight function is $s_L(x,h) = K_h(X - x)\{u_2 - u_1(X - x)\} / \sigma_0^2$ , with $u_j = E\{K_h(X - x)(X - x)^j\}$ , $j = 0,1,2$ , and $\sigma_0^2 = u_0u_2 - u_1^2$ . Here, $K_{h}(\cdot) = h^{-1}K(\cdot /h)$ is a kernel function with bandwidth $h$ . The empirical version is given by + +$$ +\widehat{m}_{L,h}(x) = \operatorname *{arg min}_{\mu \in \mathcal{W}}\frac{1}{n}\sum_{i = 1}^{n}s_{iL}(x,h)d_{\mathcal{W}}^{2}(\nu_{i},\mu), +$$ + +where $s_{iL}(x,h) = K_h(X_i - x)\{\widehat{u}_2 - \widehat{u}_1(X_i - x)\} / \widehat{\sigma}_0^2$ , with $\widehat{u}_j = n^{-1}\sum_{i=1}^{n}K_h(X_i - x)(X_i - x)^j$ , $j = 0,1,2$ , and $\widehat{\sigma}_0^2 = \widehat{u}_0\widehat{u}_2 - \widehat{u}_1^2$ . + +# 3 Methodology + +# 3.1 Setup + +We consider a transfer learning problem where target data $\{(X_i^{(0)},\nu_i^{(0)})\}_{i = 1}^{n_0}$ are sampled independently from the target population $(X^{(0)},\nu^{(0)})\sim \mathcal{F}_0$ and source data $\{(X_i^{(k)},\nu_i^{(k)})\}_{i = 1}^{n_k}$ are sampled independently from the source population $(X^{(k)},\nu^{(k)})\sim \mathcal{F}_k$ for $k = 1,\ldots ,K$ . The goal is to estimate the target model using both the target data and source data from $K$ related studies. + +For $k = 0, \ldots, K$ , assume $X^{(k)}$ has mean $\theta_{k}$ and covariance $\Sigma_{k}$ , with $\Sigma_{k}$ positive definite. Define the empirical mean and covariance of $\{X_{i}^{(k)}\}_{i=1}^{n_{k}}$ as $\overline{X}_{k} = n_{k}^{-1} \sum_{i=1}^{n_{k}} X_{i}^{(k)}$ and $\widehat{\Sigma}_{k} = \frac{1}{n_{k}} \sum_{i=1}^{n_{k}} (X_{i}^{(k)} - \overline{X}_{k})(X_{i}^{(k)} - \overline{X}_{k})^{\mathrm{T}}$ . 
For a fixed $x \in \mathbb{R}^{p}$, the weight function is $s_{G}^{(k)}(x) = 1 + (X^{(k)} - \theta_{k})^{\mathrm{T}} \Sigma_{k}^{-1}(x - \theta_{k})$, with the sample version $s_{iG}^{(k)}(x) = 1 + (X_{i}^{(k)} - \overline{X}_{k})^{\mathrm{T}} \widehat{\Sigma}_{k}^{-1}(x - \overline{X}_{k})$. The target regression function for a given $x \in \mathbb{R}^{p}$ is then $m_{G}^{(0)}(x) = \arg \min_{\mu \in \mathcal{W}} E\{s_{G}^{(0)}(x) d_{\mathcal{W}}^2(\nu^{(0)}, \mu)\}$. In the following, we present the details of transfer learning for global Fréchet regression; the key difference in the local Fréchet regression setting is the use of a different weight function, so the technical details for transfer learning in local Fréchet regression are deferred to Appendix D.

The set of informative auxiliary samples (informative set) consists of sources sufficiently similar to the target data. Formally, the informative set is defined as $\mathcal{A}_{\psi} = \left\{1 \leq k \leq K : \| f^{(0)}(x) - f^{(k)}(x)\|_2 \leq \psi \right\}$ for some $\psi > 0$, where $f^{(k)}(x) = E\{s_G^{(k)}(x)F_{\nu^{(k)}}^{-1}\}$. For brevity, let $n_{\mathcal{A}} = \sum_{k=1}^{K} n_k$.

# 3.2 Wasserstein Transfer Learning

We propose the Wasserstein Transfer Learning (WaTL) algorithm, which combines information from source datasets under the assumption that all source data are informative enough. This assumption implies that the discrepancies between the source and target are small enough for the sources to enhance estimation compared to using only the target data. When this condition is met for all source datasets, the informative set is $\mathcal{A}_{\psi} = \{1,\dots ,K\}$. The detailed steps of WaTL are presented in Algorithm 1.
# Algorithm 1 Wasserstein Transfer Learning (WaTL)

Input: Target and source data $\{(x_i^{(0)},\nu_i^{(0)})\}_{i = 1}^{n_0}\cup \big(\cup_{1\leq k\leq K}\{(x_i^{(k)},\nu_i^{(k)})\}_{i = 1}^{n_k}\big)$, regularization parameter $\lambda$, and query point $x\in \mathbb{R}^p$

Output: Target estimator $\widehat{m}_G^{(0)}(x)$

1: Weighted auxiliary estimator: $\widehat{f}(x) = \frac{1}{n_0 + n_{\mathcal{A}}} \sum_{k=0}^{K} n_k \widehat{f}^{(k)}(x)$, where $\widehat{f}^{(k)}(x) = n_k^{-1} \sum_{i=1}^{n_k} s_{iG}^{(k)}(x) F_{\nu_i^{(k)}}^{-1}$.
2: Bias correction using target data: $\widehat{f}_0(x) = \arg \min_{g\in L^2 (0,1)}\frac{1}{n_0}\sum_{i = 1}^{n_0}s_{iG}^{(0)}(x)\| F_{\nu_i^{(0)}}^{-1} - g\| _2^2 + \lambda \| g - \widehat{f} (x)\| _2$
3: Projection to Wasserstein space: $\widehat{m}_G^{(0)}(x) = \arg \min_{\mu \in \mathcal{W}}\left\| F_\mu^{-1} - \widehat{f}_0(x)\right\| _2$

In Step 1, the initial estimate $\widehat{f}$ aggregates information from both the target and source, weighted by their respective sample sizes. While this step incorporates valuable auxiliary information, the resulting estimate may be biased due to distributional differences between the target and source populations. In Step 2, the bias in $\widehat{f}$ is corrected by focusing on the target data. The regularization term $\lambda \| g - \widehat{f}(x)\|_2$ ensures a balance between target-specific precision and auxiliary-informed robustness. Theoretical guidelines for selecting $\lambda$ are provided in Theorem 2. The final step projects the corrected estimate $\widehat{f}_0$ onto the Wasserstein space, ensuring the output $\widehat{m}_G^{(0)}(x)$ respects the intrinsic geometry of $\mathcal{W}$. This projection exists and is unique because $\mathcal{W}$ is a closed and convex subset of $L^2(0,1)$.

# Algorithm 2 Adaptive Wasserstein Transfer Learning (AWaTL)

Input: Target and source data $\{(x_i^{(0)},\nu_i^{(0)})\}_{i = 1}^{n_0}\cup \big(\cup_{1\leq k\leq K}\{(x_i^{(k)},\nu_i^{(k)})\}_{i = 1}^{n_k}\big)$, regularization parameter $\lambda$, prespecified number of informative sources $L$, and query point $x\in \mathbb{R}^p$

Output: Target estimator $\widehat{m}_G^{(0)}(x)$

1: Compute discrepancy scores. For each source dataset $k = 1, \dots, K$, compute the empirical discrepancy $\widehat{\psi}_k = \| \widehat{f}^{(0)}(x) - \widehat{f}^{(k)}(x)\|_2$, where $\widehat{f}^{(k)}(x) = n_k^{-1}\sum_{i=1}^{n_k}s_{iG}^{(k)}(x)F_{\nu_i^{(k)}}^{-1}$. Construct the adaptive informative set by selecting the $L$ smallest discrepancy scores: $\widehat{\mathcal{A}} = \{k : 1 \leq k \leq K \text{ and } \widehat{\psi}_k \text{ is among the } L \text{ smallest values}\}$.
2: Weighted auxiliary estimator: $\widehat{f}(x) = \frac{1}{\sum_{k \in \widehat{\mathcal{A}} \cup \{0\}} n_k} \sum_{k \in \widehat{\mathcal{A}} \cup \{0\}} n_k \widehat{f}^{(k)}(x)$.
3: Bias correction using target data: $\widehat{f}_0(x) = \arg \min_{g\in L^2 (0,1)}\frac{1}{n_0}\sum_{i = 1}^{n_0}s_{iG}^{(0)}(x)\| F_{\nu_i^{(0)}}^{-1} - g\| _2^2 + \lambda \| g - \widehat{f} (x)\| _2$
4: Projection to Wasserstein space: $\widehat{m}_G^{(0)}(x) = \arg \min_{\mu \in \mathcal{W}}\left\| F_\mu^{-1} - \widehat{f}_0(x)\right\| _2$

# 3.3 Adaptive Selection of Informative Sources

In many practical scenarios, the assumption that all source datasets belong to the informative set $\mathcal{A}_{\psi}$ may not hold. To address this, we extend WaTL with an adaptive selection procedure to identify the informative set. The discrepancy for each source dataset $k$ is defined as $\psi_{k} = \| f^{(0)}(x) - f^{(k)}(x)\|_{2}$, which measures the distance between the target distribution and the auxiliary distribution.
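Both algorithms operate on per-domain estimators $\widehat{f}^{(k)}(x)$ tabulated on a shared quantile grid. One useful fact for implementation: because the global Fréchet weights satisfy $n_0^{-1}\sum_i s_{iG}^{(0)}(x) = 1$, the bias-correction step (a least-squares fit with an unsquared $L^2$ penalty) is minimized in closed form by shrinking the target-only estimator toward the weighted auxiliary estimator, a block soft-threshold. The sketch below uses our own function names and a sorting step as a simple surrogate for the exact monotone projection:

```python
import numpy as np

def l2norm(g):
    """L^2(0,1) norm of a function tabulated on a uniform grid."""
    return np.sqrt(np.mean(np.asarray(g) ** 2))

def watl(f_hats, n_ks, lam):
    """Algorithm 1 (WaTL) given per-domain estimators \\hat f^{(k)}(x) on a
    shared quantile grid.  f_hats[0] is the target-only estimator \\hat f^{(0)};
    n_ks are the sample sizes n_0, ..., n_K."""
    n = np.asarray(n_ks, dtype=float)
    # Step 1: sample-size-weighted auxiliary estimator
    f_aux = sum(nk * fk for nk, fk in zip(n, f_hats)) / n.sum()
    # Step 2: bias correction.  Since n_0^{-1} sum_i s_i = 1, the penalized
    # objective equals ||g - f_hats[0]||^2 + lam * ||g - f_aux|| up to a
    # constant, minimized by block soft-thresholding toward f_aux.
    r = f_hats[0] - f_aux
    shrink = max(0.0, 1.0 - lam / (2.0 * l2norm(r))) if l2norm(r) > 0 else 0.0
    f0 = f_aux + shrink * r
    # Step 3: project back to a valid (nondecreasing) quantile function
    return np.sort(f0)

def select_informative(f_hats, L):
    """Step 1 of Algorithm 2 (AWaTL): empirical discrepancies
    psi_k = ||\\hat f^{(0)} - \\hat f^{(k)}||_2 and the L closest sources."""
    psi = np.array([l2norm(f_hats[0] - fk) for fk in f_hats[1:]])
    A_hat = set((np.argsort(psi)[:L] + 1).tolist())   # 1-based source indices
    return A_hat, psi
```

With $\lambda = 0$ the output is the target-only fit; as $\lambda$ grows past $2\|\widehat{f}^{(0)} - \widehat{f}\|_2$ it falls back entirely to the auxiliary average, matching the balance discussed above.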
Since $f^{(0)}(x)$ and $f^{(k)}(x)$ are unknown, we compute an empirical estimate $\widehat{\psi}_k$ for $\psi_{k}$ , which is used to adaptively estimate the informative set $\mathcal{A}_{\psi}$ . To implement this approach, an additional input parameter $L$ , which specifies the approximate number of informative sources, is required. In practice, $L$ can be treated as a tuning parameter and selected through cross-validation or other model selection techniques. The full procedure is formalized in Algorithm 2. + +The proposed algorithm adaptively identifies the informative set $\widehat{\mathcal{A}}$ in Step 1 by evaluating the empirical discrepancy scores $\widehat{\psi}_k$ . The selected set is then used to compute the weighted auxiliary estimator in Step 2, ensuring that only the most relevant source datasets contribute to the final target estimator. Steps 3 and 4 follow the same bias correction and projection procedures as described in Section 3.2. This adaptive approach enhances the robustness of WaTL by excluding irrelevant or highly dissimilar source datasets. + +# 4 Theory + +In this section, we establish the theoretical guarantees of the proposed WaTL and AWaTL algorithms using techniques from empirical process theory [34]. For WaTL, we present the following lemma, which characterizes the convergence rate for each term contributing to the weighted auxiliary estimator $\widehat{f}(x)$ computed in Step 1. + +Condition 1. For $k = 0, \ldots, K$ , the covariate $X^{(k)}$ is sub-Gaussian with $\|X^{(k)}\|_{\Psi_2} \in [\sigma_1, \sigma_2]$ , the mean vector satisfies $\| \theta_k\| \leq R_1$ , and the spectrum of the covariance matrix $\Sigma_k$ lies within the interval $[R_2, R_3]$ . Moreover, $\nu^{(k)}$ is supported on a bounded interval. + +Lemma 1. Let $\widehat{f}^{(k)}(x) = n_k^{-1}\sum_{i=1}^{n_k}s_{iG}^{(k)}(x)F_{\nu_i^{(k)}}^{-1}$ and its population counterpart be defined as $f^{(k)}(x) = E\{s_G^{(k)}(x)F_{\nu^{(k)}}^{-1}\}$ for $k = 0, \ldots, K$ . 
Then under Condition 1, $\| \widehat{f}^{(k)}(x) - f^{(k)}(x)\|_2 = O_p(n_k^{-1/2})$ . + +To derive the convergence rate of $\widehat{f}(x)$ , we rely on the following condition. + +Condition 2. There exist positive constants $C_1, C_2, C_3, C_4$ such that + +$$ +\sum_ {k = 0} ^ {K} 2 e ^ {- C _ {1} \frac {4}{R _ {3} ^ {2}} n _ {k}} + C _ {2} e ^ {- C _ {3} \left(\frac {2}{R _ {3}} - \sqrt {\frac {C _ {4}}{n _ {k}}}\right) n _ {k}} = o (1), +$$ + +where $R_{3}$ is as in Condition 1. In addition, + +$$ +\frac {\sqrt {n _ {0} + n _ {\mathcal {A}}}}{\min _ {1 \leq k \leq K} n _ {k}} = o (1), \quad \frac {\sqrt {n _ {0} + n _ {\mathcal {A}}}}{n _ {0}} = o (1). +$$ + +Remark 1. These conditions are typically satisfied in practice as they are not overly restrictive. Condition 1 requires that covariates and covariance matrices are bounded in a specific way, which is standard in the transfer learning literature and generally holds in real-world scenarios [5]. In particular, this assumption is common when dealing with high-dimensional data where regularization is necessary [21]. Condition 2 assumes that the number of samples is significantly larger than the number of sources $K$ , which is reasonable since $K$ is usually fixed in practical settings, and we often have sufficient source data compared to target data. In practice, Condition 2 may be slightly violated if there exists a source $k$ with a relatively small $n_k$ , such that $\sqrt{n_0 + n_A} / n_k$ is not $o(1)$ . In such cases, the $k$ th source can simply be excluded from Step 1 of the WaTL algorithm. + +Theorem 1. Suppose Conditions 1 and 2 hold. 
Then, for the WaTL algorithm, it holds for a fixed $x \in \mathbb{R}^p$ that + +$$ +\| \widehat {f} (x) - f (x) \| _ {2} = O _ {p} \Big (\frac {\sum_ {k = 0} ^ {K} \sqrt {n _ {k}}}{n _ {0} + n _ {\mathcal {A}}} + (n _ {0} + n _ {\mathcal {A}}) ^ {- 1 / 2} \Big), +$$ + +where $f(x) = (n_0 + n_A)^{-1}\sum_{k = 0}^{K}n_kf^{(k)}(x).$ + +The proof of Theorem 1 involves a detailed analysis of the sample covariance matrix and leverages M-estimation techniques within the framework of empirical process theory. The result extends beyond responses in the Wasserstein space, applying to other metric spaces that meet mild regularity conditions. Consequently, Theorem 1 provides a versatile framework that can be applied to transfer learning in regression models with responses such as networks [42], symmetric positive-definite matrices [25], or trees [2]. The following theorem establishes the convergence rate for the estimated regression function $\hat{m}_G^{(0)}(x)$ in the WaTL algorithm. + +Theorem 2. Assume Conditions 1 and 2 hold and the regularization parameter satisfies $\lambda \asymp n_0^{-1/2 + \epsilon}$ for some $\epsilon > 0$ . Then, for the WaTL algorithm and a fixed $x \in \mathbb{R}^p$ , it holds that + +$$ +d _ {\mathcal {W}} ^ {2} (\widehat {m} _ {G} ^ {(0)} (x), m _ {G} ^ {(0)} (x)) = O _ {p} \Big (n _ {0} ^ {- 1 / 2 + \epsilon} \big (\psi + \frac {\sum_ {k = 0} ^ {K} \sqrt {n _ {k}}}{n _ {0} + n _ {\mathcal {A}}} + (n _ {0} + n _ {\mathcal {A}}) ^ {- 1 / 2}) \Big), +$$ + +where $\psi = \max_{1\leq k\leq K}\| f^{(0)}(x) - f^{(k)}(x)\| _2$ quantifies the maximum discrepancy between the target and source. + +Theorem 2 can be compared to the convergence rate of global Fréchet regression [26] applied solely to the target data, for which the rate is $d_{\mathcal{W}}(\widehat{m}_G^{(0)}(x), m_G^{(0)}(x)) = O_p(n_0^{-1/2})$ . 
The WaTL algorithm achieves a faster convergence rate when there are sufficient source data and the auxiliary sources are informative enough, satisfying $\psi \lesssim n_0^{-1/2 - \epsilon}$ . This result highlights that knowledge transfer from auxiliary samples can significantly enhance the learning performance of the target model, provided the auxiliary sources are closely aligned with the target data. + +For the AWaTL algorithm, we require the following condition. + +Condition 3. The regularization parameter satisfies $\lambda \asymp n_0^{-1/2 + \epsilon}$ for some $\epsilon > 0$ and for some $\epsilon' > \epsilon$ , there exists a non-empty subset $\mathcal{A} \subset \{1, \ldots, K\}$ such that + +$$ +\frac {n _ {*} ^ {- 1 / 2} + n _ {0} ^ {- 1 / 2}}{\min _ {k \in \mathcal {A} ^ {C}} \psi_ {k}} = o (1), \quad \frac {\max _ {k \in \mathcal {A}} \psi_ {k}}{n _ {*} ^ {- 1 / 2} + n _ {0} ^ {- 1 / 2 - \epsilon^ {\prime}}} = O (1), +$$ + +where $\mathcal{A}^C = \{1,\dots ,K\} \backslash \mathcal{A}$ , $n_* = \min_{1\leq k\leq K}n_k$ and $\psi_{k} = \| f^{(0)}(x) - f^{(k)}(x)\|_{2}$ . + +Remark 2. We allow $\mathcal{A}$ to be the whole set $\{1,\dots ,K\}$ , in which case the condition becomes + +$$ +\frac {\max _ {1 \leq k \leq K} \psi_ {k}}{n _ {*} ^ {- 1 / 2} + n _ {0} ^ {- 1 / 2 - \epsilon^ {\prime}}} = O (1). +$$ + +Besides, if $\mathcal{A}$ exists, then it is unique since + +$$ +\{1 \leq k \leq K: \frac {n _ {*} ^ {- 1 / 2} + n _ {0} ^ {- 1 / 2}}{\psi_ {k}} = o (1) \} \cap \{1 \leq k \leq K: \frac {\psi_ {k}}{n _ {*} ^ {- 1 / 2} + n _ {0} ^ {- 1 / 2 - \epsilon^ {\prime}}} = O (1) \} = \emptyset . +$$ + +Condition 3 ensures that the source datasets can be effectively partitioned into two groups: informative ones and those sufficiently different from the target data. This separation guarantees that the informative set can be accurately identified. 
Under this condition, by setting the number of informative sources to $L = |\mathcal{A}|$ , we establish the following rate of convergence for the AWaTL algorithm. In practice, $L$ can be decided by cross-validation. + +Theorem 3. Under Condition 3 and the conditions of Theorem 2, for the AWaTL algorithm with a fixed number of sources $K$ we have $P(\widehat{\mathcal{A}} = \mathcal{A}) \to 1$ and + +$$ +d _ {\mathcal {W}} ^ {2} (\widehat {m} _ {G} ^ {(0)} (x), m _ {G} ^ {(0)} (x)) = O _ {p} \Big (n _ {0} ^ {- 1 / 2 + \epsilon} \big (\max _ {k \in \mathcal {A}} \psi_ {k} + \frac {\sum_ {k \in \mathcal {A} \cup \{0 \}} \sqrt {n _ {k}}}{\sum_ {k \in \mathcal {A} \cup \{0 \}} n _ {k}} + \big (\sum_ {k \in \mathcal {A} \cup \{0 \}} n _ {k}) ^ {- 1 / 2} \big) \Big). +$$ + +The convergence rate of the AWaTL algorithm simplifies to $n_0^{-1 - \epsilon' + \epsilon}$ when the informative source data is sufficiently large. This rate surpasses that of global Fréchet regression [26] applied solely to the target data, offering a theoretical guarantee that AWaTL effectively mitigates negative transfer by selectively integrating relevant auxiliary information. + +While the above analysis assumes that each probability distribution is fully observed, an assumption commonly adopted in the distributional data literature [26, 7], real-world applications often provide only independent samples drawn from underlying distributions. In such cases, this limitation can be overcome by replacing unobservable distributions with their empirical counterparts, constructed from sample observations [43]. Additional details and theoretical justification for this extension are provided in Appendix E. + +# 5 Numerical Experiments + +In this section, we evaluate the performance of the proposed WaTL algorithm, alongside two baseline approaches: the global Fréchet regression using only target data (Only Target) and using only source data (Only Source). Consider $K = 5$ source sites. 
The data are generated as follows. For the target population, we sample $X^{(0)} \sim \mathrm{U}(0,1)$ and generate the response distribution, represented by its quantile function, as

$$
F _ {\nu^ {(0)}} ^ {- 1} (u) = w ^ {(0)} (1 - u) u + (1 - X ^ {(0)}) u + X ^ {(0)} F _ {Z ^ {(0)}} ^ {- 1} (u), \quad u \in (0, 1),
$$

where $Z^{(0)} \sim N(0.5,1)|_{(0,1)}$ and $w^{(0)} \sim N(0,1)|_{(-0.5,0.5)}$. Here, $N(\mu, \sigma^2)|_{(a,b)}$ denotes a normal distribution with mean $\mu$ and variance $\sigma^2$, truncated to the interval $(a,b)$. For source populations, we define $\psi_k = 0.1k$ for $k = 1,\dots,K$, and generate $X^{(k)} \sim \mathrm{U}(0,1)$. The corresponding response distribution is generated as

$$
F _ {\nu^ {(k)}} ^ {- 1} (u) = w ^ {(k)} (1 - u) u + (1 - X ^ {(k)}) u + X ^ {(k)} F _ {Z ^ {(k)}} ^ {- 1} (u), \quad u \in (0, 1),
$$

where $Z^{(k)} \sim N(0.5,1 - \psi_k)|_{(0,1)}$ and $w^{(k)} \sim N(0,1)|_{(-0.5,0.5)}$. Consequently, for each predictor $x$, the true regression function, expressed through its quantile function, is $F_{m_G^{(k)}(x)}^{-1}(u) = (1 - x)u + xF_{Z^{(k)}}^{-1}(u)$, for $k = 0,1,\ldots ,K$.

We vary the target sample size $n_0$ from 200 to 800, while the source sample size is set as $n_k = k\tau$, where $\tau \in \{100, 200\}$ and $k = 1, \dots, K$. The regularization parameter $\lambda$ in Algorithm 1 is selected via five-fold cross-validation, ranging from 0 to 3 in increments of 0.1. To evaluate performance, we sample 100 predictors uniformly from the target distribution. Using Algorithm 1, we compute $\widehat{m}_{G}^{(0)}(x)$ and compare it with the corresponding estimates obtained from global Fréchet regression using only target or source data.
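The generation above needs no special libraries: the quantile function of a truncated normal follows from the normal CDF and its inverse, both available in the standard library via `statistics.NormalDist`. A sketch of one simulated domain together with the evaluation criterion used below (grid size, seeds, and function names are our choices; the formula assumes $\psi_k < 1$ so the truncated normal has positive variance):

```python
import numpy as np
from statistics import NormalDist

def truncnorm_ppf(u, mu, sigma, a, b):
    """Quantile function of N(mu, sigma^2) truncated to (a, b)."""
    nd = NormalDist(mu, sigma)
    Fa, Fb = nd.cdf(a), nd.cdf(b)
    return np.array([nd.inv_cdf(Fa + ui * (Fb - Fa)) for ui in np.atleast_1d(u)])

def simulate_domain(n, psi_k, m=100, seed=0):
    """One domain of the simulation: X ~ U(0,1) and response quantile
    functions F^{-1}(u) = w (1-u) u + (1-X) u + X F_Z^{-1}(u) on a grid,
    with Z ~ N(0.5, 1 - psi_k)|_(0,1) and w ~ N(0, 1)|_(-0.5, 0.5)."""
    rng = np.random.default_rng(seed)
    u = (np.arange(m) + 0.5) / m
    qZ = truncnorm_ppf(u, 0.5, np.sqrt(1.0 - psi_k), 0.0, 1.0)
    X = rng.uniform(size=n)
    w = truncnorm_ppf(rng.uniform(size=n), 0.0, 1.0, -0.5, 0.5)  # inverse-CDF sampling
    Q = w[:, None] * ((1 - u) * u) + (1 - X)[:, None] * u + X[:, None] * qZ
    return X, Q

def rmspr(Q_hat, Q_true):
    """Root mean squared prediction risk over query points, with each d_W^2
    computed as the mean squared quantile difference on the uniform grid."""
    return np.sqrt(np.mean(np.mean((Q_hat - Q_true) ** 2, axis=1)))
```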
Performance is assessed using the root mean squared prediction risk $\mathrm{RMSPR} = \sqrt{\frac{1}{100}\sum_{i=1}^{100}d_{\mathcal{W}}^2\bigl(\widehat{m}_{G}^{(0)}(x_i),m_{G}^{(0)}(x_i)\bigr)}$, where $x_i$ denotes the sampled predictor, $\widehat{m}_{G}^{(0)}(x_i)$ is the estimated function, and $m_{G}^{(0)}(x_i)$ represents the ground truth. To ensure robustness, we repeat the simulation 50 times and report the average RMSPR.

As shown in Figure 1(a), WaTL consistently outperforms global Fréchet regression trained solely on target or source data. When the target sample size $n_0$ is small, the Only Target method exhibits a high RMSPR due to the instability of models trained on limited data. In contrast, WaTL significantly reduces RMSPR by effectively incorporating auxiliary information from the source domain. As $n_0$ increases, the performance of Only Target improves and gradually approaches that of WaTL, which is expected as larger sample sizes lead to more stable and accurate estimators. Nevertheless, WaTL maintains a consistent advantage across all $n_0$, suggesting that it leverages complementary information from the source. The performance of the Only Source estimator remains nearly unchanged across different $n_0$ values, as it does not benefit from additional target data.

Comparing the two panels of Figure 1(a), we also observe that WaTL improves as the source sample size increases, confirming its ability to effectively integrate information from source domains. This demonstrates the benefit of multi-source transfer learning, where WaTL balances knowledge from both target and source domains to achieve improved prediction.

To better understand when negative transfer may occur, we conduct an ablation study with $K = 1$ source and vary the similarity parameter $\psi_{1}$ from 0.01 to 1 in increments of 0.01, with $n_0 = 100$ and $n_1 = 200$.
Our method outperforms the Only Target approach when $\psi_{1} < 0.9$ , while for $\psi_{1} \geq 0.9$ , the Only Target method becomes preferable. This confirms that negative transfer arises when the source is too dissimilar to the target. + +We further evaluate the effectiveness of AWaTL in selecting informative sources. In this experiment, we set $L = 2$ and $n_k = 100$ for $k = 0, \dots, 5$ . The similarity parameters are specified as $\psi_k = 0.1$ for $k = 1, 2$ (informative sources), and $\psi_k = \psi$ , increasing from 0.2 to 1 in increments of 0.1, for $k = 3, 4, 5$ (uninformative sources). Each configuration is repeated 100 times, and the corresponding selection rates are reported in Figure 1(b). The results show that AWaTL successfully identifies informative sources, with selection rates for sources 1 and 2 rapidly increasing and reaching perfect accuracy once $\psi > 0.6$ . These findings demonstrate the robustness of AWaTL in distinguishing useful sources under varying similarity levels. + +![](images/18e64acd3e3b30e181e93be5e0c4a0f18a6b9eb437c07f3d7cf7f3a429690582.jpg) +(a) + +![](images/adec346c4b7353eea6dc69be260a346123a9fbfc3980b11d3cd10004c2456d2b.jpg) +(b) +Figure 1: (a) Root mean squared prediction risk (RMSPR) of WaTL, only Source, and Only Target methods under varying target sample sizes, with source sample sizes $\tau = 100$ (left) and $\tau = 200$ (right); (b) Selection rate of each source site as $\psi$ increases. + +# 6 Real-world Applications + +We evaluate the WaTL algorithm using data from the National Health and Nutrition Examination Survey (NHANES) 2005-2006 $^3$ , focusing on modeling the distribution of physical activity intensity. NHANES is a large-scale health survey in the United States that combines interviews with physical examinations to assess the health and nutrition of both adults and children. 
The dataset includes extensive demographic, socioeconomic, dietary, and medical assessments, providing a comprehensive resource for health-related research.

During the 2005-2006 NHANES cycle, participants aged 6 and older wore an ActiGraph 7164 accelerometer on their right hip for seven days, recording physical activity intensity in 1-minute epochs. Participants were instructed to remove the device during water-based activities and sleep. The device measured counts per minute (CPM), ranging from 0 to 32767, capturing variations in activity levels throughout the monitoring period.

Since female and male participants exhibit distinct physical activity patterns [12], we analyze them separately. Physical activity intensity is influenced by multiple factors, and we consider body mass index (BMI) and age as key predictors [19, 9]. To accommodate potential nonlinear relationships, we implement local Fréchet regression within the WaTL algorithm, treating the distribution of physical activity intensity as the response and BMI and age as predictors.

Following the data preprocessing steps in [23], we remove unreliable observations per NHANES protocols. For each participant, we exclude activity counts above 1000 CPM or equal to zero (as zeros may correspond to various low-activity states such as sleep or swimming). Participants with fewer than 100 valid observations or with missing BMI, age, or gender information are also excluded. The remaining activity counts over the seven days are concatenated to form the distribution of each participant's activity intensity.

To evaluate WaTL, we set White, Mexican American, and Other Hispanic individuals as sources and Black individuals as the target. For females, the source data include 1308 White, 884 Mexican American, and 108 Other Hispanic participants. For males, the source data include 1232 White, 805 Mexican American, and 92 Other Hispanic participants.
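The preprocessing rules described above (drop counts that are zero or exceed 1000 CPM, then drop participants with fewer than 100 valid observations or missing covariates) can be sketched as follows; this is a hypothetical helper written from the text, not the paper's released code:

```python
def preprocess_participant(counts, bmi, age, gender,
                           cpm_cap=1000, min_valid=100):
    """Apply the stated NHANES filtering rules to one participant.
    Returns the list of valid activity counts, or None if the
    participant is excluded. Thresholds follow the text; argument
    names are illustrative."""
    # Exclude participants with missing BMI, age, or gender.
    if bmi is None or age is None or gender is None:
        return None
    # Keep only counts strictly above zero and at most the CPM cap.
    valid = [c for c in counts if 0 < c <= cpm_cap]
    # Exclude participants with too few valid observations.
    return valid if len(valid) >= min_valid else None
```

The surviving counts for each retained participant then form the empirical distribution used as the regression response.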
We set 200 Black participants as the target data for both genders. For evaluation, we perform five-fold cross-validation, using four folds for training and one for testing, cycling through each fold. For comparison, we also apply local Fréchet regression using only the target data. The results, summarized in Figure 2(a), show that WaTL improves performance over local Fréchet regression for both females and males, demonstrating its ability to leverage information from other demographic groups to enhance modeling for Black participants.

![](images/210f2f696fca11f3363a2262a9bddad5b89e6dd3a42cf95d3544234f66e033a6.jpg)
(a)

![](images/119e0566c0e6e4f5ad4e9dd8fdab7867d7e9600e68924c32af7b4dabf6f9b938.jpg)
(b)
Figure 2: (a) Root mean squared prediction risk (RMSPR) of the WaTL and Only Target methods for females and males, evaluated using five-fold cross-validation; (b) Cumulative distribution function of physical activity levels for one selected female (left) and one selected male (right), along with estimates from the WaTL and Only Target methods.

To further illustrate the effectiveness of WaTL, we visualize in Figure 2(b) the cumulative distribution function of physical activity levels for one female and one male participant, along with estimates from WaTL and from local Fréchet regression using only the target data. The results indicate that WaTL provides a better fit to the true distribution, outperforming the estimate obtained using only the target data. We further evaluate the robustness of WaTL on a human mortality dataset in Appendix A, where WaTL continues to demonstrate strong performance.

# 7 Conclusion

We introduce Wasserstein transfer learning and its adaptive variant for scenarios where the informative set is unknown, addressing the challenges posed by the lack of linear operations in the Wasserstein space.
By leveraging the Wasserstein metric, the proposed algorithm accounts for the non-Euclidean structure of distributional outputs, ensuring compatibility with the intrinsic geometry of the Wasserstein space. Supported by rigorous theoretical guarantees, the framework demonstrates improved estimation performance compared to methods that rely solely on target data.

This paper focuses on univariate distributions, which arise frequently in real-world applications such as the NHANES and human mortality studies discussed in Section 6. The proposed framework, however, is not limited to the univariate case and can be extended to multivariate distributions by incorporating metrics such as the Sinkhorn [11] or sliced Wasserstein distance [4]. Extending the methodology to higher dimensions is an exciting future direction that entails addressing both computational and theoretical challenges specific to high-dimensional Wasserstein spaces.

# Acknowledgments and Disclosure of Funding

We thank the Area Chair and the reviewers for their constructive feedback. Doudou Zhou was supported by the NUS Start-Up Grant (A-0009985-00-00) and the MOE AcRF Tier 1 Grant (A-8003569-00-00).

# References

[1] Jérémie Bigot, Raul Gouet, Thierry Klein, and Alfredo López. Geodesic PCA in the Wasserstein space by convex PCA. Annales de l'Institut Henri Poincaré B: Probability and Statistics, 53:1-26, 2017.
[2] Louis J. Billera, Susan P. Holmes, and Karen Vogtmann. Geometry of the space of phylogenetic trees. Advances in Applied Mathematics, 27(4):733-767, 2001.
[3] Sergey Bobkov and Michel Ledoux. One-dimensional Empirical Measures, Order Statistics, and Kantorovich Transport Distances, volume 261. American Mathematical Society, 2019.
[4] Nicolas Bonneel, Julien Rabin, Gabriel Peyré, and Hanspeter Pfister. Sliced and Radon Wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 51(1):22-45, 2015.
[5] T. Tony Cai, Dongwoo Kim, and Hongming Pu.
Transfer learning for functional mean estimation: Phase transition and adaptive algorithms. Annals of Statistics, 52(2):654-678, 2024.
[6] T. Tony Cai and Hongming Pu. Transfer learning for nonparametric regression: Non-asymptotic minimax analysis and adaptive procedure. arXiv preprint arXiv:2401.12272, 2024.
[7] Yaqing Chen, Zhenhua Lin, and Hans-Georg Müller. Wasserstein regression. Journal of the American Statistical Association, 118(542):869-882, 2023.
[8] Keunwoo Choi, George Fazekas, Mark Sandler, and Kyunghyun Cho. Transfer learning for music classification and regression tasks. In Proceedings of the 18th International Society for Music Information Retrieval (ISMIR) Conference, Suzhou, China, 2017.
[9] Laura Cleven, Jeremy A. Syrjanen, Yonas E. Geda, Luke R. Christenson, Ronald C. Petersen, Maria Vassilaki, Alexander Woll, and Janina Krell-Roesch. Association between physical activity and longitudinal change in body mass index in middle-aged and older adults. BMC Public Health, 23(1):202, 2023.
[10] Nicolas Courty, Rémi Flamary, Amaury Habrard, and Alain Rakotomamonjy. Joint distribution optimal transportation for domain adaptation. In Advances in Neural Information Processing Systems, volume 30, pages 3730-3739, 2017.
[11] Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In C. J. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc., 2013.
[12] Alessandra Feraco, Stefania Gorini, Elisabetta Camajani, Tiziana Filardi, Sercan Karav, Edda Cava, Rocky Strollo, Elvira Padua, Massimiliano Caprio, Andrea Armani, and Mauro Lombardo. Gender differences in dietary patterns and physical activity: An insight with principal component analysis (PCA). Journal of Translational Medicine, 22(1):1112, 2024.
[13] Maurice Fréchet.
Les éléments aléatoires de nature quelconque dans un espace distancié. Annales de l'Institut Henri Poincaré, 10(4):215-310, 1948.
[14] Laya Ghodrati and Victor M. Panaretos. Distribution-on-distribution regression via optimal transport maps. Biometrika, 109(4):957-974, 2022.
[15] Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsupervised domain adaptation. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2066-2073, 2012.
[16] Jui-Ting Huang, Jinyu Li, Dong Yu, Li Deng, and Yifan Gong. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 7304-7308, 2013.
[17] Su I Iao, Yidong Zhou, and Hans-Georg Müller. Deep Fréchet regression. Journal of the American Statistical Association, 2025. In press.
[18] Leonid V. Kantorovich. On the translocation of masses. Dokl. Akad. Nauk SSSR, 37:227-229, 1942. (Translated in Journal of Mathematical Sciences, 133:1381-1382, 2006.)
[19] Thomas Klein and Simone Becker. Age and exercise: A theoretical and empirical analysis of the effect of age and generation on physical activity. Journal of Public Health, 20:11-21, 2012.
[20] Benoit Kloeckner. A geometric study of Wasserstein spaces: Euclidean spaces. Annali della Scuola Normale Superiore di Pisa-Classe di Scienze, 9(2):297-323, 2010.
[21] Sai Li, T. Tony Cai, and Hongzhe Li. Transfer learning for high-dimensional linear regression: Prediction, estimation and minimax optimality. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(1):149-173, 2022.
[22] Haotian Lin and Matthew Reimherr. Smoothness adaptive hypothesis transfer learning.
In Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp, editors, Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 30286-30316. PMLR, 2024.
[23] Zhenhua Lin, Dehan Kong, and Linbo Wang. Causal inference on distribution functions. Journal of the Royal Statistical Society Series B: Statistical Methodology, 85(2):378-398, 2023.
[24] Victor M. Panaretos and Yoav Zemel. An Invitation to Statistics in Wasserstein Space. Springer New York, 2020.
[25] Xavier Pennec, Pierre Fillard, and Nicholas Ayache. A Riemannian framework for tensor computing. International Journal of Computer Vision, 66(1):41-66, 2006.
[26] Alexander Petersen and Hans-Georg Müller. Fréchet regression for random objects with Euclidean predictors. Annals of Statistics, 47(2):691-719, 2019.
[27] Alexander Petersen, Chao Zhang, and Piotr Kokoszka. Modeling probability density functions as data objects. Econometrics and Statistics, 21:159-178, 2022.
[28] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020.
[29] Ievgen Redko, Amaury Habrard, and Marc Sebban. Theoretical analysis of domain adaptation with optimal transport. In Michelangelo Ceci, Jaakko Hollmén, Ljupčo Todorovski, Celine Vens, and Sašo Džeroski, editors, Machine Learning and Knowledge Discovery in Databases, pages 737-753, Cham, 2017. Springer International Publishing.
[30] Hoo-Chang Shin, Holger R. Roth, Mingchen Gao, Le Lu, Ziyue Xu, Isabella Nogues, Jianhua Yao, Daniel Mollura, and Ronald M. Summers. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning.
IEEE Transactions on Medical Imaging, 35(5):1285-1298, 2016.
[31] Ye Tian and Yang Feng. Transfer learning under high-dimensional generalized linear models. Journal of the American Statistical Association, 118(544):2684-2697, 2023.
[32] Lisa Torrey and Jude Shavlik. Transfer learning. In Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, pages 242-264. IGI Global, 2010.
[33] Aad W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 2000.
[34] Aad W. van der Vaart and Jon A. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer New York, 2nd edition, 2023.
[35] Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2018.
[36] Cédric Villani. Optimal Transport: Old and New, volume 338. Springer, 2009.
[37] Martin J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2019.
[38] Karl Weiss, Taghi M. Khoshgoftaar, and DingDing Wang. A survey of transfer learning. Journal of Big Data, 3(9):1-40, 2016.
[39] Chao Zhang, Piotr Kokoszka, and Alexander Petersen. Wasserstein autoregressive models for density time series. Journal of Time Series Analysis, 43(2):30-52, 2022.
[40] Doudou Zhou, Mengyan Li, Tianxi Cai, and Molei Liu. Model-assisted and knowledge-guided transfer regression for the underrepresented population. arXiv preprint arXiv:2410.06484, 2024.
[41] Doudou Zhou, Molei Liu, Mengyan Li, and Tianxi Cai. Doubly robust augmented model accuracy transfer inference with high dimensional features. Journal of the American Statistical Association, pages 1-26, 2024.
[42] Yidong Zhou and Hans-Georg Müller. Network regression with graph Laplacians.
Journal of Machine Learning Research, 23(320):1-41, 2022.
[43] Yidong Zhou and Hans-Georg Müller. Wasserstein regression with empirical measures and density estimation for sparse data. Biometrics, 80(4):ujae127, 2024.
[44] Changbo Zhu and Hans-Georg Müller. Autoregressive optimal transport models. Journal of the Royal Statistical Society Series B: Statistical Methodology, 85(3):1012-1033, 2023.

# NeurIPS Paper Checklist

# 1. Claims

Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?

Answer: [Yes]

Justification: The abstract and introduction accurately state the paper's contributions, including the proposal of the WaTL framework, theoretical analysis of convergence rates, and validation through simulations and real-world data. These are detailed in subsequent sections (e.g., Section 1.1, Section 3, Section 4).

Guidelines:

- The answer NA means that the abstract and introduction do not include the claims made in the paper.
- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

# 2. Limitations

Question: Does the paper discuss the limitations of the work performed by the authors?

Answer: [Yes]

Justification: The paper discusses a limitation concerning the focus on univariate distributions and suggests how this could be addressed in practical settings. This is found in Section 7.
Guidelines:

- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
- The authors are encouraged to create a separate "Limitations" section in their paper.
- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
# 3. Theory assumptions and proofs

Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?

Answer: [Yes]

Justification: Theoretical results (Lemmas and Theorems) are presented in Section 4, with assumptions clearly stated (Conditions 1-3). Detailed proofs are provided in Appendix C.

Guidelines:

- The answer NA means that the paper does not include theoretical results.
- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
- All assumptions should be clearly stated or referenced in the statement of any theorems.
- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
- Theorems and Lemmas that the proof relies upon should be properly referenced.

# 4. Experimental result reproducibility

Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?

Answer: [Yes]

Justification: The paper details the WaTL and AWaTL algorithms (Algorithms 1 and 2 in Section 3), the data generation process for simulations (Section 5), and the preprocessing steps for the real-world NHANES data application and human mortality data application (Section 6, Appendix A), providing sufficient information to understand how the main experimental results were obtained.

Guidelines:

- The answer NA means that the paper does not include experiments.
- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

# 5. Open access to data and code

Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?

Answer: [Yes]

Justification: The paper uses publicly available datasets, including NHANES (Section 6) and human mortality data (Appendix A), with URLs provided. The code used to generate all results is included in the submission as a zip folder, supporting reproducibility of the experiments.

Guidelines:

- The answer NA means that the paper does not include experiments requiring code.
- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines.
If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

# 6. Experimental setting/details

Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?

Answer: [Yes]

Justification: The paper specifies experimental settings, including sample sizes, data generation procedures for simulations (Section 5), data preprocessing for real-world applications (Section 6), and how hyperparameters like $\lambda$ were chosen (five-fold cross-validation in Section 5).

Guidelines:

- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.

# 7. Experiment statistical significance

Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

Answer: [No]

Justification: The paper reports average RMSPR over 50 repetitions for simulations (Section 5) and results from cross-validation for real data (Section 6), but error bars, confidence intervals, or formal statistical significance tests for these experimental results are not explicitly reported in the figures or text.

Guidelines:

- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
- If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

# 8. Experiments compute resources

Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

Answer: [No]

Justification: This paper introduces a new transfer learning method for regression with distributional outputs, emphasizing methodological and theoretical contributions rather than computational efficiency. The approach is not compute-intensive and was implemented using standard CPU resources, so details on compute infrastructure and execution time were not included.

Guidelines:

- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers (CPU or GPU, internal cluster, or cloud provider), including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

# 9. Code of ethics

Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?

Answer: [Yes]

Justification: We have reviewed the NeurIPS Code of Ethics and confirm that the research conforms to it. The work focuses on algorithmic development and utilizes publicly available, de-identified data for real-world applications (NHANES and human mortality datasets).

Guidelines:

- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

# 10. Broader impacts

Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?

Answer: [NA]

Justification: The paper primarily focuses on the methodological and theoretical contributions of the proposed transfer learning framework. A dedicated discussion of potential positive and negative societal impacts is not included, though the real-world applications (Section 6) in health and demography hint at positive uses.

Guidelines:

- The answer NA means that there is no societal impact of the work performed.
- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

# 11. Safeguards

Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
Answer: [NA]

Justification: The research introduces new regression algorithms and applies them to existing public datasets. It does not involve the release of new pretrained models or datasets that pose a high or direct risk for misuse in the manner of generative AI or sensitive scraped data.

Guidelines:

- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

# 12. Licenses for existing assets

Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

Answer: [Yes]

Justification: The paper credits the NHANES and human mortality datasets and provides the related URLs (Section 6).

Guidelines:

- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided.
For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. +- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. +- If this information is not available online, the authors are encouraged to reach out to the asset's creators. + +# 13. New assets + +Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? + +Answer: [NA] + +Justification: The paper introduces new algorithms (WaTL and AWaTL) which are described in detail (Algorithms 1 and 2, Section 3). No new datasets or software/models are being released as downloadable assets alongside the paper that would require separate documentation files. + +Guidelines: + +- The answer NA means that the paper does not release new assets. +- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. +- The paper should discuss whether and how consent was obtained from people whose asset is used. +- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. + +# 14. Crowdsourcing and research with human subjects + +Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? + +Answer: [NA] + +Justification: The research uses existing, publicly available, and de-identified datasets (NHANES, human mortality data from Section 6) involving human subjects. It does not involve new data collection through crowdsourcing or direct interaction with human subjects by the authors. 
Therefore, details of participant instructions or compensation for the original data collection are not applicable to this paper's contribution. + +Guidelines: + +- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. +- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. +- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. + +# 15. Institutional review board (IRB) approvals or equivalent for research with human subjects + +Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? + +Answer: [NA] + +Justification: The research involves the secondary analysis of publicly available and de-identified datasets (NHANES, human mortality data from Section 6). Such research typically does not require a new IRB approval for the current authors, as ethical oversight and risk disclosure were presumably handled by the original data collectors. The paper does not describe new risks to participants arising from this secondary analysis. + +Guidelines: + +- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. +- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
+- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. +- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review. + +# 16. Declaration of LLM usage + +Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required. + +Answer: [NA] + +Justification: The core methodology of this research focuses on transfer learning for regression models with distributional outputs in Wasserstein space and does not involve the use of Large Language Models as an important, original, or non-standard component. + +Guidelines: + +- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components. +- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described. + +# A Human Mortality Data + +We further assess WaTL using the age-at-death distributions from 162 countries in 2015, compiled from the United Nations Databases and the UN World Population Prospects 2022. The dataset provides country-level age-specific death counts, which we convert into smooth age-at-death densities using local linear smoothing; see Figure 3 (a) for an illustration. For this analysis, we define the 24 developed countries as the target site and the remaining 138 developing countries as the source site. + +We evaluate the performance of WaTL against several key baselines on this dataset.
These include models trained only on the target data (Only Target), only on the source data (Only Source), and on a naive pooling of both (Target + Source). Furthermore, for these baselines, we compare the performance of our underlying Fréchet regression framework against Wasserstein Regression [7], another state-of-the-art method for distributional data. + +Table 1: Performance and Training Time with Varying Target Sample Sizes. + +| Number of Target Samples | RMSPR | Training Time (ms) |
| --- | --- | --- |
| 14 | 0.028 | 0.598 |
| 19 | 0.025 | 0.597 |
| 24 | 0.022 | 0.694 |
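Since the 2-Wasserstein distance between one-dimensional distributions equals the $L^2$ distance between their quantile functions (the representation used throughout Appendix C), an RMSPR of the kind reported in Table 1 can be approximated numerically from predicted and observed quantile curves. The sketch below illustrates this; the grid, quantile curves, and shift are hypothetical stand-ins, not the actual mortality data or the WaTL predictor.

```python
import numpy as np

# Uniform grid on (0, 1); for 1-d distributions the squared 2-Wasserstein
# distance equals the L^2 distance between quantile functions.
grid = np.linspace(0.005, 0.995, 199)

def w2_sq(q_pred, q_true):
    # Riemann approximation of the integral of (F_pred^{-1} - F_true^{-1})^2
    # over (0, 1); valid because the grid is uniform
    return np.mean((q_pred - q_true) ** 2)

def rmspr(Q_pred, Q_true):
    # root mean squared prediction risk across evaluation distributions
    return np.sqrt(np.mean([w2_sq(p, t) for p, t in zip(Q_pred, Q_true)]))

# hypothetical age-at-death quantile curves (monotone in the grid variable)
Q_true = np.stack([70 + 15 * grid, 75 + 12 * grid, 80 + 10 * grid])
Q_pred = Q_true + 0.02  # predictions off by a uniform shift of 0.02
print(round(rmspr(Q_pred, Q_true), 3))  # 0.02
```

A uniform shift of a quantile function by $c$ yields a Wasserstein distance of exactly $|c|$, which is why the printed risk matches the shift.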
+ +The comprehensive results are presented in Figure 3 (b). The proposed WaTL method achieves the lowest Root Mean Squared Prediction Risk (RMSPR) of 0.022, demonstrating a marked improvement over all alternatives. Notably, it significantly outperforms models trained solely on the 24 target samples (RMSPR of 0.027 for Fréchet Regression and 0.025 for Wasserstein Regression) and the naive data pooling approach (RMSPR of 0.033). This highlights that simply combining datasets is insufficient to overcome the domain shift between developing and developed countries, validating the necessity of our bias-correction mechanism. + +To further assess WaTL's robustness and practicality, especially in common scenarios with limited target data, we analyze its performance with a varying number of target samples. As shown in Table 1, the model's predictive accuracy consistently improves as the target sample size increases from 14 to 24. It is also worth noting that the method maintains exceptional computational efficiency, with training times remaining under one millisecond. These findings confirm that WaTL is not only highly accurate but also a robust and efficient solution for real-world demographic studies where target data can be scarce. + +![](images/49d23c6c00560204e8754e2b8b38896167504e21ead0fa679fe94e6316e5467b.jpg) +(a) + +![](images/4276ec0dcaef16549c0941e10a62c9f81e6f9885a75e4af4a991a490b2836ab5.jpg) +(b) +Figure 3: (a) Age-at-death densities of developed and developing countries; (b) Root mean squared prediction risk (RMSPR) of WaTL and Only Target methods for human mortality data. + +# B Additional Notations + +For any $g_1, g_2 \in L^2(0,1)$ , the $L^2$ inner product is defined as + +$$ +\langle g _ {1}, g _ {2} \rangle_ {2} = \int_ {0} ^ {1} g _ {1} (z) g _ {2} (z) d z. 
+$$ + +The total variation of a function $g$ on the interval $(a,b)$ is + +$$ +V_{a}^{b}(g) = \lim_{n \to +\infty} \sup_{a < x_{1} < \ldots < x_{n} < b} \sum_{i=2}^{n} |g(x_{i}) - g(x_{i-1})|. +$$ + +The space of bounded variation functions is given by + +$$ +BV((0,1), H) = \{g \colon (0,1) \to (-H, H) \mid V_{0}^{1}(g) \leq H\}, +$$ + +which is equipped with the $L^2$ metric. + +For a matrix $\Sigma$, let $\|\Sigma\|$ and $\|\Sigma\|_{\mathrm{F}}$ denote its operator norm and Frobenius norm, respectively, and let $\gamma_i(\Sigma)$ be its $i$th smallest singular value. + +We denote the characteristic function of a set $\mathcal{A}$ by $1_{\mathcal{A}}$ and use $|\mathcal{A}|$ to represent its cardinality. In a metric space $(\Omega, d)$, the covering number $\mathcal{N}(K, d, \epsilon)$ is the smallest number of closed balls of radius $\epsilon$ required to cover a subset $K \subset \Omega$. The notation $\bigsqcup$ denotes the disjoint union of sets. + +For notational simplicity, we will omit $(x)$ from $f(x)$ when the meaning is clear from the context. + +# C Proof + +Lemma 2. Consider $U: \mathcal{W} \times (0,1) \mapsto \mathbb{R}$, where $U(\nu, x) \coloneqq F_{\nu}^{-1}(x)$ and $\mathcal{W} \times (0,1)$ is endowed with the product $\sigma$-algebra generated by the Borel algebra on $\mathcal{W}$ and the Borel algebra on $(0,1)$. The first Borel algebra $\mathcal{B}(\mathcal{W})$ is generated by open balls in $(\mathcal{W}, d_{\mathcal{W}})$ and the second $\mathcal{B}((0,1))$ is generated by Euclidean open balls. Then $U$ is measurable. + +Lemma 2 enables a simplified expression for the conditional Fréchet mean of $\nu$ given $X = x$ by applying Fubini's theorem.
Specifically, we have + +$$ +\begin{array}{l} m(x) = \operatorname*{arg min}_{\mu \in \mathcal{W}} E\{d^{2}_{\mathcal{W}}(\nu, \mu) \mid X = x\} \\ = \operatorname*{arg min}_{\mu \in \mathcal{W}} E\{d^{2}_{L^{2}}(F_{\nu}^{-1}, F_{\mu}^{-1}) \mid X = x\} \\ = \operatorname*{arg min}_{\mu \in \mathcal{W}} d_{L^{2}}^{2}(E\{F_{\nu}^{-1} \mid X = x\}, F_{\mu}^{-1}), \\ \end{array} +$$ + +where the last equality follows from Fubini's theorem, since $E\{F_{\nu}^{-1} \mid X = x\}$ is measurable. A similar argument applies to $m_G(x)$ and $\hat{m}_G(x)$. + +# C.1 Proof of Lemma 2 + +Proof. Without loss of generality, it suffices to prove that $\{(\nu, x) \mid F_{\nu}^{-1}(x) > 0\}$ is measurable. Note that any quantile function is left continuous and monotone increasing, hence $\{(\nu, x) \mid F_{\nu}^{-1}(x) > 0\} = \bigcup_{q \in \mathbb{Q} \cap (0,1)} \{\nu \mid F_{\nu}^{-1}(q) > 0\} \times (q, 1)$. + +Then without loss of generality, it suffices to prove that $\{\nu \mid F_{\nu}^{-1}(0.5) > 0\}$ is measurable. Since $\mathcal{W}$ is separable, we can select a dense countable subset $K$. We then define $A = \{\nu \in K \mid F_{\nu}^{-1}(0.5) > 0\}$. Assuming $A = \{\nu_i, i \in \mathbb{N}\}$, we have $\{\nu \mid F_{\nu}^{-1}(0.5) > 0\} = \{\nu \mid \underline{\lim}_{i \to \infty} d_{\mathcal{W}}(\nu_i, \nu) = 0\} = \bigcap_{n \geq 1} \overline{\lim}_{i \to \infty} B_{\frac{1}{n}}(\nu_i)$, where $B_{\epsilon}(\nu)$ is the ball centered at $\nu$ with radius $\epsilon$. Here we remark that both left continuity and monotonicity are used in the first equality. Hence $U$ is measurable. + +# C.2 Proof of Lemma 1 + +Proof. First, it is easy to check that $f^{(k)} \in BV((0,1), H_0)$ for some $H_0 > 0$ since $f^{(k)} = E\{s_G^{(k)}(x) 1_{\{s_G^{(k)}(x) > 0\}} F_{\nu^{(k)}}^{-1}\} - E\{|s_G^{(k)}(x)| 1_{\{s_G^{(k)}(x) < 0\}} F_{\nu^{(k)}}^{-1}\}$ and $E|s_G^{(k)}(x)| < \infty$.
By taking $H_0$ large enough, We also claim that $P(\widehat{f}^{(k)} \in BV((0,1), H_0)) \to 1$ since $\widehat{f}^{(k)}$ also has a similar decomposition and + +$$ +\begin{array}{l} \frac {1}{n _ {k}} \sum_ {i = 1} ^ {n _ {k}} | s _ {i G} ^ {(k)} (x) | \leq \frac {1}{n _ {k}} \sum_ {i = 1} ^ {n _ {k}} (| s _ {i G} ^ {(k)} (x) - s _ {i} ^ {(k)} (x) | + | s _ {i} ^ {(k)} (x) |) \\ \leq o _ {p} (1) + H _ {1} \\ \end{array} +$$ + +for some $H_1 > 0$ with high probability, where $s_i^{(k)}(x) = 1 + (X_i^{(k)} - \theta_k)\Sigma_k^{-1}(x - \theta_k)$ . The last inequality holds because of the result given by the proof of Theorem 2 in [26] that $\frac{1}{n_k}\sum_{i=1}^{n_k}|s_{iG}^{(k)}(x) - s_i^{(k)}(x)| = o_p(1)$ and the assumption that $X^{(k)}$ is subguassian. + +To use Theorem 2 in [26], taking $\Omega$ to be $BV((0,1),H_0)$ , $M(g,x)$ to be $M^{(k)}(g,x) = E\{s_{G}^{(k)}(x)\| F_{\nu^{(k)}}^{-1} - g\| _2^2\}$ for any $g\in BV((0,1),H_0)$ , it suffices to check that the other two assumptions of the theorem: + +$$ +\int_ {0} ^ {1} \sqrt {1 + \log \mathcal {N} (B _ {\delta} (f ^ {(k)}) \cap B V ((0 , 1) , H _ {0}) , d _ {L ^ {2}} , \delta \epsilon)} d \epsilon = O (1) +$$ + +as $\delta \to 0$ and + +$$ +M ^ {(k)} (g, x) - M ^ {(k)} \left(f ^ {(k)}, x\right) \geq \| f ^ {(k)} - g \| _ {2} ^ {2}. +$$ + +Example 19.11 in [33] gives a bound of the covering number of the space $(BV((0,1),H_0),d_{L^2})$ , + +$$ +\mathcal {N} (B V ((0, 1), H _ {0}), d _ {L ^ {2}}, \epsilon) \leq e ^ {\frac {K}{\epsilon}} +$$ + +for some $K > 0$ where $K$ is independent of $\epsilon$ . + +The rest is similar to the proof of Proposition 1 in [26]. For $g \in BV((0,1), H_0)$ , $B_{\gamma}(g)$ denotes the $L^2$ ball of radius $\gamma$ centered at $g$ . Let $\mathcal{C}_{\epsilon}(f^{(k)}) \coloneqq \{g_u : u \in U\}$ such that $|U| = |\mathcal{N}(BV((0,1), H_0) \cap B_1(f), d_{L^2}, \epsilon)| \leq e^{K\epsilon^{-1}}$ and the balls $B_{\epsilon}(g_u)$ covers $B_1(f) \cap BV((0,1), H_0)$ . 
For $\delta > 0$, we define $\tilde{g}_u = f^{(k)} + \delta(g_u - f^{(k)})$. Then the balls $B_{\delta\epsilon}(\tilde{g}_u)$ cover $B_{\delta}(f^{(k)}) \cap BV((0,1), H_0)$. Hence + +$$ +\int_0^1 \sqrt{1 + \log \mathcal{N}(B_{\delta}(f^{(k)}) \cap BV((0,1), H_0), d_{L^2}, \delta\epsilon)}\, d\epsilon \leq \int_0^1 \sqrt{1 + K\epsilon^{-1}}\, d\epsilon \leq 1 + 2\sqrt{K} < \infty. +$$ + +For the last assumption, just note that + +$$ +M^{(k)}(g, x) - M^{(k)}\left(f^{(k)}, x\right) = \|f^{(k)} - g\|_2^2. +$$ + +Then according to Theorem 2 in [26], $\|\widehat{f}^{(k)} - f^{(k)}\|_2 = O_p(n_k^{-1/2})$. + +# C.3 Proof of Theorem 1 + +To prove Theorem 1, we first establish a lemma that quantifies the bias introduced in Step 1 of Algorithm 1. This lemma is formulated for a general metric space, making it broadly applicable and potentially useful for extending transfer learning algorithms to other metric spaces. + +We first introduce some notation, most of which parallels our original setting. The only difference is that we use $Y$ instead of $\nu$ to represent a random object in a general metric space $(\Omega, d)$. We define $n_* = \min_{1 \leq k \leq K} n_k$, $n_{\mathcal{A}} = \sum_{k=1}^{K} n_k$, $\alpha_k = n_k / (n_0 + n_{\mathcal{A}})$, + +$$ +m_1(x) = \operatorname*{arg min}_{\omega \in \Omega} M^1(\omega, x), \quad M^1(\omega, x) = \sum_{k=0}^{K} \alpha_k E\{s_G^{(k)}(x) d^2(\omega, Y^{(k)})\}, +$$ + +and + +$$ +\widehat{m}_1(x) = \operatorname*{arg min}_{\omega \in \Omega} M_n^1(\omega, x), \quad M_n^1(\omega, x) = \frac{1}{n_0 + n_{\mathcal{A}}} \sum_{k=0}^{K} \sum_{i=1}^{n_k} s_{iG}^{(k)}(x) d^2(\omega, Y_i^{(k)}). +$$ + +We impose the following mild condition on the metric space $(\Omega, d)$, which is widely used in the literature on non-Euclidean data analysis [26].
This condition holds for various metric spaces commonly encountered in real-world applications, including the Wasserstein space, the space of networks, and the space of symmetric positive-definite matrices, among others. + +Condition 4. (i) $m_1(x)$ uniquely exists and $\widehat{m}_1(x)$ uniquely exists almost surely. Additionally, for any $\gamma > 0$, $\inf_{d(m_1(x),\omega) > \gamma} M^1(\omega, x) > M^1(m_1(x), x)$. + +(ii) Let $B_{\delta}(m_1(x)) \subset \Omega$ be the ball of radius $\delta$ centered at $m_1(x)$. Then + +$$ +J(\delta) := \int_0^1 \sqrt{1 + \log \mathcal{N}(B_{\delta}(m_1(x)), d, \delta\epsilon)}\, d\epsilon = O(1) \quad \text{as } \delta \to 0. +$$ + +(iii) There exist $\eta > 0$, $C > 0$ and $\beta > 1$, possibly depending on $x$, such that whenever $d(m_1(x), \omega) < \eta$, we have $M^1(\omega, x) - M^1(m_1(x), x) \geq C d^{\beta}(\omega, m_1(x))$. + +Lemma 3. Under Conditions 1, 2 and 4, $d^{2}(\widehat{m}_{1}(x), m_{1}(x)) = O_{p}\Big(\left(\frac{\sum_{k=0}^{K} \sqrt{n_{k}}}{n_{0} + n_{\mathcal{A}}} + (n_{0} + n_{\mathcal{A}})^{-1/2}\right)^{\frac{1}{\beta-1}}\Big)$. + +# C.3.1 Proof of Lemma 3 + +Proof. Denote $V_n(\omega, x) = M_n^1(\omega, x) - M^1(\omega, x)$ and $D_i^{(k)}(\omega, x) = d^2(Y_i^{(k)}, \omega) - d^2(Y_i^{(k)}, m_1(x))$, and recall that $s_i^{(k)}(x) = 1 + (X_i^{(k)} - \theta_k)^{\mathrm{T}} \Sigma_k^{-1}(x - \theta_k)$. Then + +$$ +\begin{array}{l} |V_n(\omega, x) - V_n(m_1(x), x)| \leq \sum_{k=0}^{K} \alpha_k \Big\{ \Big|\frac{1}{n_k} \sum_{i=1}^{n_k} (s_{iG}^{(k)}(x) - s_i^{(k)}(x)) D_i^{(k)}(\omega)\Big| \\ + \Big|\frac{1}{n_k} \sum_{i=1}^{n_k} s_i^{(k)}(x) D_i^{(k)}(\omega) - E\{s_i^{(k)}(x) D_i^{(k)}(\omega)\}\Big| \Big\}.
\tag {A.1} \\ \end{array} +$$ + +For any $\delta > 0$, + +$$ +\sup_{d(\omega, m_1(x)) < \delta} \left|\frac{1}{n_k} \sum_{i=1}^{n_k} (s_{iG}^{(k)}(x) - s_i^{(k)}(x)) D_i^{(k)}(\omega)\right| \leq \frac{2\mathrm{diam}(\Omega)\delta}{n_k} \sum_{i=1}^{n_k} \left|W_0^{(k)}(x) + W_1^{(k)}(x)^{\mathrm{T}} X_i^{(k)}\right|, +$$ + +where $W_0^{(k)}(x) \coloneqq (\overline{X}^{(k)})^{\mathrm{T}} \Sigma_k^{-1}(x - \overline{X}^{(k)}) - \theta_k^{\mathrm{T}} \Sigma_k^{-1}(x - \theta_k)$ and $W_1^{(k)}(x) \coloneqq \Sigma_k^{-1}(x - \theta_k) - \widehat{\Sigma}_k^{-1}(x - \overline{X}^{(k)})$. Then + +$$ +s_{iG}^{(k)}(x) - s_i^{(k)}(x) = W_0^{(k)}(x) + \left(W_1^{(k)}(x)\right)^{\mathrm{T}} X_i^{(k)}. \tag{A.2} +$$ + +We omit the superscript $(k)$ for notational simplicity. + +Denote $B_{1,M} \coloneqq \{\|\frac{1}{n}\sum_{i=1}^{n} |X_i|\| \leq M\}$, where $|X_i|$ represents the element-wise absolute value of $X_i$. Let $B_{2,M} = \{\|\widehat{\Sigma}^{-1}\| \leq M\}$, $B_M = B_{1,M} \cap B_{2,M}$ and $\widehat{\Sigma}' = \frac{1}{n}\sum_{i=1}^{n} (X_i - \theta)(X_i - \theta)^{\mathrm{T}}$. In $B_{1,M}$, + +$$ +\begin{array}{l} \left\|\widehat{\Sigma} - \widehat{\Sigma}'\right\| \leq \left\|\widehat{\Sigma} - \widehat{\Sigma}'\right\|_F \\ = \left\|\frac{1}{n}\sum_{i=1}^{n} [(X_i - \bar{X})(X_i - \bar{X})^{\mathrm{T}} - (X_i - \theta)(X_i - \theta)^{\mathrm{T}}]\right\|_F \\ \leq \left[2p^2 \|\overline{X}^2 - \theta^2\|_2^2 + M^2 \|\overline{X} - \theta\|_2^2\right]^{1/2} \\ \leq 2p(M + |\theta|)\|\bar{X} - \theta\|_2.
\\ \end{array} +$$ + +Using Theorem 6.5 in [37] leads to + +$$ +P(\|\widehat{\Sigma} - \Sigma\| > \epsilon) \leq 2e^{-C_1 \epsilon^2 n} + C_2 e^{-C_3 (\frac{1}{2}\epsilon - \sqrt{\frac{C_4}{n}})^2 n}, +$$ + +for any $\epsilon \in (2\sqrt{\frac{C_4}{n}}, 1)$. Note that + +$$ +\begin{array}{l} \|\widehat{\Sigma}^{-1}\| \leq \|\Sigma^{-1}\| + \|\widehat{\Sigma}^{-1} - \Sigma^{-1}\| \\ \leq \|\Sigma^{-1}\| + \frac{\left|\gamma_1(\Sigma) - \gamma_1(\widehat{\Sigma})\right|}{\gamma_1(\Sigma)\gamma_1(\widehat{\Sigma})} \\ \leq \|\Sigma^{-1}\| + \frac{\|\Sigma - \widehat{\Sigma}\|}{\gamma_1(\Sigma)\left(\gamma_1(\Sigma) - \|\Sigma - \widehat{\Sigma}\|\right)}, \\ \end{array} +$$ + +where the third inequality follows from Weyl's inequality. Then there exists $M > 0$ such that + +$$ +P(\|\widehat{\Sigma}^{-1}\| \geq M) \leq 2e^{-C_1 \frac{4}{R_3^2} n} + C_2 e^{-C_3 (\frac{1}{R_3} - \sqrt{\frac{C_4}{n}})^2 n}. +$$ + +Consequently, + +$$ +P(B_M^c) \leq 3e^{-\frac{4C_1}{R_3^2} n} + C_2 e^{-C_3 (\frac{1}{R_3} - \sqrt{\frac{C_4}{n}})^2 n}. +$$ + +In $B_M$, + +$$ +\begin{array}{l} |W_0(x)| = \left|(\bar{X} - \theta)^{\mathrm{T}} \Sigma^{-1} x + \theta^{\mathrm{T}} \Sigma^{-1}(\theta - \bar{X}) + \bar{X}^{\mathrm{T}} \Sigma^{-1}(\theta - \bar{X})\right| \\ \leq \|\Sigma^{-1}\| \|\bar{X} - \theta\|_2 (\|x\|_2 + \|\theta\|_2 + M).
\\ \end{array} +$$ + +Hence in $B_M' \coloneqq \cap_{k=0}^{K} B_M^{(k)}$, we have + +$$ +\begin{array}{l} P\left(\sum_{k=0}^{K} \alpha_k |W_0^{(k)}(x)| > t\right) \leq 2e^{-\frac{C_5 t^2}{\sum_{k=0}^{K} \alpha_k^2 \|W_0^{(k)}(x)\|_{\Psi_2}^2}} \\ \leq 2e^{-\frac{C_6 t^2}{\sum_{k=0}^{K} \frac{n_k}{(n_0 + n_{\mathcal{A}})^2}}} \leq 2e^{-C_6 t^2 (n_0 + n_{\mathcal{A}})}, \tag{A.3} \\ \end{array} +$$ + +for some constants $C_5, C_6 > 0$. + +In the first inequality, we use the general Hoeffding inequality (Theorem 2.6.2 in [35]), and in the second inequality, we use Proposition 2.6.1 in [35]. + +In addition, + +$$ +\begin{array}{l} \frac{1}{n}\sum_{i=1}^{n} |(W_1(x))^{\mathrm{T}} X_i| \leq M\left[\|\Sigma^{-1} - \widehat{\Sigma}^{-1}\| \|x\| + \|\Sigma^{-1}(\theta - \overline{X})\| + \|(\Sigma^{-1} - \widehat{\Sigma}^{-1})\overline{X}\|\right] \\ \leq M(\|x\| + M)\|\Sigma^{-1} - \widehat{\Sigma}^{-1}\| + MR_3 \|\theta - \bar{X}\| \\ \leq M^2(\|x\| + M)R_3 \|\Sigma - \widehat{\Sigma}\| + MR_3 \|\theta - \bar{X}\|. \\ \end{array} +$$ + +Hence in $B_M'$, + +$$ +P\left(\sum_{k=0}^{K} \alpha_k \frac{1}{n_k} \sum_{i=1}^{n_k} \left|\left(X_i^{(k)}\right)^{\mathrm{T}} W_1^{(k)}(x)\right| > t\right) \leq 2e^{-C_8 t^2 (n_0 + n_{\mathcal{A}})} + P\left(\sum_{k=0}^{K} \alpha_k C_9 \|\Sigma_k - \widehat{\Sigma}_k\| > t\right) \tag{A.4} +$$ + +for some constants $C_8, C_9 > 0$.
Using Markov's inequality, the second term is bounded by + +$$ +e^{-\lambda t} \Pi_{k=0}^{K} E e^{\lambda \alpha_k C_9 \|\Sigma_k - \widehat{\Sigma}_k\|} \leq e^{-\lambda t} e^{4pK + \sum_{k=0}^{K} C_{10} \frac{\alpha_k^2 \lambda^2}{n_k}}, +$$ + +which follows from Theorem 6.5 in [37] for any $\lambda$ satisfying $\lambda \leq C_{11} n_*$ for some constants $C_{10}, C_{11} > 0$. We can choose $\lambda \coloneqq \frac{t(n_0 + n_{\mathcal{A}})}{2C_{10}}$, which satisfies the theorem's assumption if $t = O\left((n_0 + n_{\mathcal{A}})^{-1/2}\right)$. Here we utilize Condition 2. Then + +$$ +P\left(\sum_{k=0}^{K} \alpha_k C_9 \|\Sigma_k - \widehat{\Sigma}_k\| > t\right) \leq e^{-\frac{t^2 (n_0 + n_{\mathcal{A}})}{4C_{10}} + 4pK}. \tag{A.5} +$$ + +Note that $K = o(n_0 + n_{\mathcal{A}})$, hence + +$$ +\sup_{d(\omega, m_1(x)) < \delta} \sum_{k=0}^{K} \alpha_k \left[\left|\frac{1}{n_k} \sum_{i=1}^{n_k} (s_{iG}^{(k)}(x) - s_i^{(k)}(x)) D_i^{(k)}(\omega)\right|\right] = O_p\left(\delta (n_0 + n_{\mathcal{A}})^{-1/2}\right), +$$ + +since $P(B_M') \to 1$ due to Condition 2. + +To bound the second term of (A.1), following the proof of Theorem 2 in [26], for each $k$ we define the function $g_{\omega}^{(k)}: \mathbb{R}^p \times \Omega \mapsto \mathbb{R}$ as + +$$ +g_{\omega}^{(k)}(z, y) = \left[1 + \left(z - \theta_k\right)^{\mathrm{T}} \Sigma_k^{-1}(x - \theta_k)\right] d^2(y, \omega) +$$ + +and the function class + +$$ +\mathcal{M}_{\delta}^{(k)} \coloneqq \left\{g_{\omega}^{(k)} - g_{m_1(x)}^{(k)} : d(\omega, m_1(x)) < \delta\right\}.
+$$ + +We have + +$$ +E\left\{\sup_{d(\omega, m_1(x)) < \delta} \left|\frac{1}{n_k} \sum_{i=1}^{n_k} s_i^{(k)}(x) D_i^{(k)}(\omega) - E\{s_i^{(k)}(x) D_i^{(k)}(\omega)\}\right|\right\} \leq C_{13} J(1) E\left[(G_{\delta}^{(k)}(X))^2\right]^{\frac{1}{2}} n_k^{-1/2}, +$$ + +where $G_{\delta}^{(k)}(z) \coloneqq 2\mathrm{diam}(\Omega)\delta \left[1 + (z - \theta_k)^{\mathrm{T}} \Sigma_k^{-1}(x - \theta_k)\right]$ is the envelope function. + +Note that + +$$ +E(G_{\delta}^{(k)}(X)^2) \leq C_{14} \delta^2 +$$ + +for some constant $C_{14} > 0$, which does not depend on $k$. Hence + +$$ +E\left\{\sup_{d(\omega, m_1(x)) < \delta} \left|\sum_{k=0}^{K} \frac{1}{n_0 + n_{\mathcal{A}}} \sum_{i=1}^{n_k} s_i^{(k)}(x) D_i^{(k)}(\omega) - E\left\{s_i^{(k)}(x) D_i^{(k)}(\omega)\right\}\right|\right\} = O\left(\delta \sum_{k=0}^{K} \frac{\sqrt{n_k}}{n_0 + n_{\mathcal{A}}}\right). \tag{A.6} +$$ + +Define + +$$ +D_R \coloneqq \left\{\sup_{d(\omega, m_1(x)) < \delta} \sum_{k=0}^{K} \alpha_k \left|\frac{1}{n_k} \sum_{i=1}^{n_k} \left(s_{iG}^{(k)}(x) - s_i^{(k)}(x)\right) D_i^{(k)}(\omega)\right| \leq R\delta \left(n_0 + n_{\mathcal{A}}\right)^{-1/2}\right\}.
+$$ + +We have + +$$ +E\left\{1_{D_R} \sup_{d(\omega, m_1(x)) < \delta} |V_n(\omega) - V_n(m_1(x))|\right\} \leq aR\delta \left(\sum_{k=0}^{K} \frac{\sqrt{n_k}}{n_0 + n_{\mathcal{A}}} + (n_0 + n_{\mathcal{A}})^{-1/2}\right), +$$ + +for some constant $a > 0$. + +Next we show that for all $\omega \in \Omega$, + +$$ +\left|M^1(\omega, x) - M_n^1(\omega, x)\right| = o_p(1), +$$ + +and that for any $\kappa > 0$, $\varphi > 0$, there exists $\delta > 0$ such that + +$$ +\limsup_{n} P\left(\sup_{d(\omega_1, \omega_2) < \delta} \left|M_n^1(\omega_1, x) - M_n^1(\omega_2, x)\right| > \kappa\right) \leq \varphi. \tag{A.7} +$$ + +To prove the first assertion, we denote + +$$ +\tilde{M}_n^1(\omega, x) = \frac{1}{n_0 + n_{\mathcal{A}}} \sum_{k=0}^{K} \sum_{i=1}^{n_k} s_i^{(k)}(x) d^2(\omega, Y_i^{(k)}). +$$ + +Then $E\tilde{M}_n^1(\omega, x) = M^1(\omega, x)$ and $\mathrm{Var}(\tilde{M}_n^1(\omega, x)) \leq 2\sum_{k=0}^{K} C' \frac{n_k}{(n_0 + n_{\mathcal{A}})^2}$ for some constant $C' > 0$. Hence $\tilde{M}_n^1(\omega, x) - M^1(\omega, x) = o_p(1)$. + +Besides, + +$$ +\begin{array}{l} M_n^1(\omega, x) - \tilde{M}_n^1(\omega, x) = \sum_{k=0}^{K} \alpha_k \frac{W_0^{(k)}(x)}{n_k} \sum_{i=1}^{n_k} d^2\left(Y_i^{(k)}, \omega\right) + \sum_{k=0}^{K} \alpha_k \frac{\left(W_1^{(k)}(x)\right)^{\mathrm{T}}}{n_k} \sum_{i=1}^{n_k} X_i^{(k)} d^2\left(Y_i^{(k)}, \omega\right) \\ = o_p(1). \\ \end{array} +$$ + +The last equality holds since $\sum_{k=0}^{K} \alpha_k \frac{1}{n_k} \sum_{i=1}^{n_k} |(W_1^{(k)}(x))^{\mathrm{T}} X_i^{(k)}| = o_p(1)$ and $\sum_{k=0}^{K} \alpha_k |W_0^{(k)}(x)| = o_p(1)$.
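The bounds above all revolve around the empirical global Fréchet regression weights $s_{iG}^{(k)}(x) = 1 + (X_i^{(k)} - \bar{X}^{(k)})^{\mathrm{T}} \widehat{\Sigma}_k^{-1}(x - \bar{X}^{(k)})$ and their deviation from the population weights $s_i^{(k)}(x)$. A small numerical sketch (the covariates and evaluation point are hypothetical) shows how these weights are formed and verifies that their sample mean is exactly one, since the centered covariates sum to zero; this normalization underlies (A.8) and the surrounding $o_p(1)$ arguments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 3
X = rng.normal(size=(n, p))      # hypothetical covariates for one site
x = np.array([0.5, -1.0, 0.2])   # hypothetical evaluation point

X_bar = X.mean(axis=0)
Sigma_hat = (X - X_bar).T @ (X - X_bar) / n  # empirical covariance matrix
Sigma_inv = np.linalg.inv(Sigma_hat)

# empirical weights s_iG(x) = 1 + (X_i - X_bar)^T Sigma_hat^{-1} (x - X_bar)
s = 1.0 + (X - X_bar) @ Sigma_inv @ (x - X_bar)

# the weights average to one exactly, because the centered rows sum to zero
print(round(s.mean(), 6))  # 1.0
```

Note the weights can be negative far from $\bar{X}$, which is why the proofs work with $|s_{iG}^{(k)}(x)|$ rather than the weights themselves.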
+ +To prove (A.7), note that for any $\gamma_1, \gamma_2 \in \Omega$, + +$$ +\begin{array}{l} \left|M_n^1\left(\gamma_1, x\right) - M_n^1\left(\gamma_2, x\right)\right| \leq 2\operatorname{diam}(\Omega) d\left(\gamma_1, \gamma_2\right) \frac{1}{n_0 + n_{\mathcal{A}}} \sum_{k=0}^{K} \sum_{i=1}^{n_k} \left|s_i^{(k)}(x) + W_0^{(k)}(x) + \left(W_1^{(k)}(x)\right)^{\mathrm{T}} X_i^{(k)}\right| \\ = O_p\left(d\left(\gamma_1, \gamma_2\right)\right). \\ \end{array} +$$ + +The last equality holds since $\sum_{k=0}^{K} \alpha_k \frac{1}{n_k} \sum_{i=1}^{n_k} |(W_1^{(k)}(x))^{\mathrm{T}} X_i^{(k)}| = o_p(1)$ and $\sum_{k=0}^{K} \alpha_k |W_0^{(k)}(x)| = o_p(1)$, and we can prove + +$$ +\sum_{k=0}^{K} \alpha_k \frac{1}{n_k} \sum_{i=1}^{n_k} \left|s_i^{(k)}(x)\right| = 1 + o_p(1) \tag{A.8} +$$ + +in a similar way to (A.3). It follows that $d(\widehat{m}_1(x), m_1(x)) = o_p(1)$. + +Set $r_n = \left(\sum_{k=0}^{K} \frac{\sqrt{n_k}}{n_0 + n_{\mathcal{A}}} + (n_0 + n_{\mathcal{A}})^{-1/2}\right)^{\frac{\beta}{2(\beta-1)}}$ and + +$$ +S_{j,n}(x) = \left\{\omega : 2^{j-1} < r_n d(\omega, m_1(x))^{\frac{\beta}{2}} \leq 2^j\right\}. +$$ + +Choose $\eta > 0$ to satisfy (iii) in Condition 4 and also small enough that (ii) in Condition 4 holds for all $\delta < \eta$, and set $\tilde{\eta} \coloneqq \eta^{\frac{\beta}{2}}$. For any integer $L_2$, + +$$ +\begin{array}{l} P\left(r_n d^{\beta/2}(\widehat{m}_1(x), m_1(x)) > 2^{L_2}\right) \leq P\left(D_R^c\right) + P\left(2d(\widehat{m}_1(x), m_1(x)) \geq \eta\right) \\ + \sum_{\substack{j \geq L_2 \\ 2^j \leq r_n \tilde{\eta}}} P\Big(\{\sup_{\omega \in S_{j,n}} |V_n(\omega) - V_n(m_1(x))| \geq C \frac{2^{2(j-1)}}{r_n^2}\} \cap D_R\Big). \\ \end{array} +$$ + +The second term converges to 0 since $d(\widehat{m}_1(x), m_1(x)) = o_p(1)$.
For each $j$ in the sum in the third term, we have $d(\omega, m_1(x)) \leq \left(\frac{2^j}{r_n}\right)^{\frac{2}{\beta}} \leq \eta$, so the sum is bounded by + +$$ +4aC^{-1} \sum_{\substack{j \geq L_2 \\ 2^j \leq r_n \tilde{\eta}}} \frac{2^{\frac{2j(1-\beta)}{\beta}}}{r_n^{\frac{2(1-\beta)}{\beta}} \Big(\sum_{k=0}^{K} \frac{\sqrt{n_k}}{n_0 + n_{\mathcal{A}}} + (n_0 + n_{\mathcal{A}})^{-\frac{1}{2}}\Big)} \leq 4aC^{-1} \sum_{j \geq L_2} \Big(\frac{1}{4^{\frac{\beta-1}{\beta}}}\Big)^j. +$$ + +Choosing $L_2$ large enough makes this probability arbitrarily small. + +Hence we have + +$$ +d^2(\widehat{m}_1(x), m_1(x)) = O_p\left(\left(\frac{\sum_{k=0}^{K} \sqrt{n_k}}{n_0 + n_{\mathcal{A}}} + (n_0 + n_{\mathcal{A}})^{-1/2}\right)^{\frac{1}{\beta-1}}\right). +$$ + +# C.3.2 Proof of Theorem 1 Given Lemma 3 + +Proof. The proof is similar to that of Lemma 1. First, it is easy to check that $f \in BV((0,1), H_2)$ for some large $H_2 > 0$ since + +$$ +f = \sum_{k=0}^{K} \alpha_k \left[E\left\{s_G^{(k)}(x) 1_{\{s_G^{(k)}(x) > 0\}} F_{\nu^{(k)}}^{-1}\right\} - E\left\{\left|s_G^{(k)}(x)\right| 1_{\{s_G^{(k)}(x) < 0\}} F_{\nu^{(k)}}^{-1}\right\}\right] +$$ + +and $\sum_{k=0}^{K} \alpha_k E|s_G^{(k)}(x)| < \infty$. Taking $H_2$ large enough, we also claim that $P(\widehat{f} \in BV((0,1), H_2)) \to 1$ since $\widehat{f}$ also has a similar decomposition and + +$$ +\begin{array}{l} \frac{1}{n_0 + n_{\mathcal{A}}} \sum_{k=0}^{K} \sum_{i=1}^{n_k} |s_{iG}^{(k)}(x)| \leq \frac{1}{n_0 + n_{\mathcal{A}}} \sum_{k=0}^{K} \sum_{i=1}^{n_k} (|s_{iG}^{(k)}(x) - s_i^{(k)}(x)| + |s_i^{(k)}(x)|) \\ \leq o_p(1) + H_3, \\ \end{array} +$$ + +for some $H_3 > 0$ with high probability.
The last inequality follows from (A.2), (A.8), (A.5) and (A.4). Thus, (i) in Condition 4 holds. + +(ii) in Condition 4 follows from the same arguments used in the proof of Lemma 1. It remains to verify (iii) in Condition 4, which holds since + +$$ +M^1(g, x) - M^1(f, x) = \|f - g\|_2^2. +$$ + +Hence the result follows by using Lemma 3. + +# C.4 Proof of Theorem 2 + +Proof. Recall that $f^{(0)}(x) = E\{s_G^{(0)}(x) F_{\nu^{(0)}}^{-1}\}$. Using the definition of $\widehat{f}_0$, + +$$ +\begin{array}{l} \|\widehat{f}_0 - f^{(0)}\|_2^2 = \frac{1}{n_0} \sum_{i=1}^{n_0} s_{iG}^{(0)}(x) \big(\|\widehat{f}_0 - F_{\nu_i^{(0)}}^{-1}\|_2^2 - \|f^{(0)} - F_{\nu_i^{(0)}}^{-1}\|_2^2\big) \\ + \frac{2}{n_0} \sum_{i=1}^{n_0} s_{iG}^{(0)}(x) \langle F_{\nu_i^{(0)}}^{-1} - f^{(0)}, \widehat{f}_0 - f^{(0)}\rangle_2 \\ \leq -\lambda \|\widehat{f}_0 - \widehat{f}\|_2 + \lambda \|f^{(0)} - \widehat{f}\|_2 \\ + \frac{2}{n_0} \sum_{i=1}^{n_0} s_{iG}^{(0)}(x) \langle F_{\nu_i^{(0)}}^{-1} - f^{(0)}, \widehat{f}_0 - f^{(0)}\rangle_2 \\ \leq -\lambda \|\widehat{f}_0 - \widehat{f}\|_2 + \lambda \|f^{(0)} - \widehat{f}\|_2 \\ + 2\|\widehat{f}_0 - f^{(0)}\|_2 \big\|\frac{1}{n_0} \sum_{i=1}^{n_0} s_{iG}^{(0)}(x) F_{\nu_i^{(0)}}^{-1} - f^{(0)}\big\|_2. \\ \end{array} +$$ + +We define the event $E_n \coloneqq \{\|n_0^{-1} \sum_{i=1}^{n_0} s_{iG}^{(0)}(x) F_{\nu_i^{(0)}}^{-1} - f^{(0)}\|_2 \leq \lambda/2\}$. According to Lemma 1, we have $P(E_n) \to 1$ since $\lambda \asymp n_0^{-1/2+\epsilon}$.
Under $E_{n}$, for $n$ large enough,

$$
\|\widehat{f}_{0} - f^{(0)}\|_{2}^{2} \leq 2\lambda\|f^{(0)} - \widehat{f}\|_{2} \leq 2\lambda(\|f^{(0)} - f\|_{2} + \|\widehat{f} - f\|_{2})
$$

holds. Hence we have

$$
d_{\mathcal{W}}^{2}\left(\widehat{m}_{G}^{(0)}(x), m_{G}^{(0)}(x)\right) \leq \|\widehat{f}_{0} - f^{(0)}\|_{2}^{2} = O_{p}\left(n_{0}^{-1/2 + \epsilon}\left(\|\widehat{f} - f\|_{2} + \psi\right)\right).
$$

Then using Theorem 1, it follows that

$$
d_{\mathcal{W}}^{2}(\widehat{m}_{G}^{(0)}(x), m_{G}^{(0)}(x)) = O_{p}\Big(n_{0}^{-1/2 + \epsilon}\big(\psi + \frac{\sum_{k = 0}^{K}\sqrt{n_{k}}}{n_{0} + n_{\mathcal{A}}} + (n_{0} + n_{\mathcal{A}})^{-1/2}\big)\Big).
$$

![](images/c06e3c8fb0e25ed2de59ca7b9dec711a617048f0ab9d6c2b1fa6208dbaa03bf0.jpg)

# C.5 Proof of Theorem 3

Proof. We first prove $P(\widehat{\mathcal{A}} = \mathcal{A})\to 1$. Note that

$$
\begin{array}{l}
P(\widehat{\mathcal{A}} = \mathcal{A}) \geq P\left(\max_{k\in\mathcal{A}}\widehat{\psi}_{k} < \min_{k\in\mathcal{A}^{C}}\widehat{\psi}_{k}\right) \\
\geq 1 - P\left(\max_{1\leq k\leq K}|\widehat{\psi}_{k} - \psi_{k}| > \left|\max_{k\in\mathcal{A}}\psi_{k} - \min_{k\in\mathcal{A}^{C}}\psi_{k}\right|\right) \\
\geq 1 - K\max_{1\leq k'\leq K}P\left(|\widehat{\psi}_{k'} - \psi_{k'}| > \left|\max_{k\in\mathcal{A}}\psi_{k} - \min_{k\in\mathcal{A}^{C}}\psi_{k}\right|\right).
\end{array}
$$

According to Lemma 1, $|\psi_k - \widehat{\psi}_k| = O_p(n_k^{-1/2} + n_0^{-1/2})$. According to Condition 3,

$$
\frac{\left|\max_{k\in\mathcal{A}}\psi_{k} - \min_{k\in\mathcal{A}^{C}}\psi_{k}\right|}{n_{*}^{-1/2} + n_{0}^{-1/2}} \to \infty.
$$

Hence

$$
P(\widehat{\mathcal{A}} = \mathcal{A}) = 1 - o(1).
$$

Then using Theorem 2 where we substitute $\mathcal{A}$ for $\{1,\dots,K\}$, we have

$$
d_{\mathcal{W}}^{2}(\widehat{m}_{G}^{(0)}(x), m_{G}^{(0)}(x)) = O_{p}\Big(n_{0}^{-1/2 + \epsilon}\big(\max_{k\in\mathcal{A}}\psi_{k} + \frac{\sum_{k\in\mathcal{A}\cup\{0\}}\sqrt{n_{k}}}{\sum_{k\in\mathcal{A}\cup\{0\}}n_{k}} + \big(\sum_{k\in\mathcal{A}\cup\{0\}}n_{k}\big)^{-1/2}\big)\Big).
$$

![](images/52751be58d3f132d16198cd54749d2ab9cf970af26fab85902ae03aeeeadcd10.jpg)

# D Transfer Learning for Local Fréchet Regression

For simplicity, we consider a scalar predictor $X^{(k)} \in \mathbb{R}$ for $k = 0, \dots, K$. For predictors in $\mathbb{R}^p$, the explicit form of the weight function can be found in Section 2.3 of [17]. Under the same source-target data setting, we define

$$
m^{(k)}(x) = \operatorname*{arg\,min}_{\mu \in \mathcal{W}} E\{d_{\mathcal{W}}^{2}(\nu^{(k)},\mu)\,|\,X^{(k)} = x\},
$$

$$
m_{L,h}^{(k)}(x) = \operatorname*{arg\,min}_{\mu \in \mathcal{W}} E\{s_{L}^{(k)}(x,h)\,d_{\mathcal{W}}^{2}(\nu^{(k)},\mu)\},
$$

$$
\widehat{m}_{L,h}^{(k)}(x) = \operatorname*{arg\,min}_{\mu \in \mathcal{W}} \frac{1}{n_{k}}\sum_{i = 1}^{n_{k}} s_{iL}^{(k)}(x,h)\,d_{\mathcal{W}}^{2}(\nu_{i}^{(k)},\mu),
$$

where $s_{L}^{(k)}(x,h)$ and $s_{iL}^{(k)}(x,h)$ represent the population and sample weight functions of local Fréchet regression for the $k$th source.

For notational simplicity, define

$$
f_{\oplus}^{(k)}(x) = E\{F_{\nu^{(k)}}^{-1}\,|\,X^{(k)} = x\},
$$

$$
f_{h}^{(k)}(x) = E\{s_{L}^{(k)}(x,h)F_{\nu^{(k)}}^{-1}\},
$$

$$
f_{h}(x) = \sum_{k = 0}^{K}\alpha_{k}f_{h}^{(k)}(x).
$$

Similar to Section 3, we introduce the Local Wasserstein Transfer Learning (LWaTL) algorithm in Algorithm 3, assuming that all sources are informative. When the informative set is unknown, we incorporate an additional step to identify the informative set, as outlined in Algorithm 4. Theoretical guarantees for these algorithms are provided in Theorem 4 and Theorem 5.

We impose the following condition, where the first two parts correspond to the kernel and distributional assumptions that are standard in local linear regression.

Condition 5. (i) The kernel $K$ is a probability density function, symmetric around zero. Furthermore, defining $K_{sj} = \int_{\mathbb{R}} K^s(u) u^j du$, both $|K_{14}|$ and $|K_{26}|$ are finite.

(ii) The marginal density $q^{(k)}$ of $X^{(k)}$ and the conditional density of $X^{(k)}$ given $\nu^{(k)}$, denoted $g_{\nu^{(k)}}^{(k)}$, exist and are twice continuously differentiable. Moreover,

$$
\sup_{0\leq k\leq K}\sup_{z,\nu^{(k)}}\max\left\{\big(q^{(k)}\big)''(z),\, \big(g_{\nu^{(k)}}^{(k)}\big)''(z)\right\} < H_{3}
$$

for some $H_3 > 0$.

(iii) $h\to 0$ and $n_0h\to \infty$.

(iv) $\frac{\sum_{k = 1}^{K}n_k^{-1}}{h} = o(1)$.

(v) The $\nu^{(k)}$ have a common bounded domain.

(vi) $u_{j}^{(k)}\coloneqq E\big(K_{h}(X_{i}^{(k)} - x)(X_{i}^{(k)} - x)^{j}\big)$, $j = 0,1,2$, and $(\sigma_0^{(k)})^{2}\coloneqq u_0^{(k)}u_2^{(k)} - (u_1^{(k)})^2 > 0$ are bounded.

# Algorithm 3 Local Wasserstein Transfer Learning (LWaTL)

Input: Target and source data $\{(x_i^{(0)},\nu_i^{(0)})\}_{i = 1}^{n_0}\cup \big(\cup_{1\leq k\leq K}\{(x_i^{(k)},\nu_i^{(k)})\}_{i = 1}^{n_k}\big)$, regularization parameter $\lambda$, bandwidth $h$ and query point $x\in \mathbb{R}$.
Output: Target estimator $\widehat{m}_{L,h}^{(0)}(x)$

1: Weighted auxiliary estimator

$$
\widehat{f}_{h}(x) = \frac{1}{n_{0} + n_{\mathcal{A}}}\sum_{k = 0}^{K}n_{k}\widehat{f}_{h}^{(k)}(x),
$$

where $\widehat{f}_h^{(k)}(x) = n_k^{-1}\sum_{i = 1}^{n_k}s_{iL}^{(k)}(x,h)F_{\nu_i^{(k)}}^{-1}$ and $n_{\mathcal{A}} = \sum_{k = 1}^{K}n_{k}$.

2: Bias correction using target data

$$
\widehat{f}_{0h}(x) = \operatorname*{arg\,min}_{g\in L^{2}(0,1)}\frac{1}{n_{0}}\sum_{i = 1}^{n_{0}}s_{iL}^{(0)}(x,h)\|F_{\nu_{i}^{(0)}}^{-1} - g\|_{2}^{2} + \lambda\|g - \widehat{f}_{h}(x)\|_{2}.
$$

3: Projection to Wasserstein space

$$
\widehat{m}^{(0)}_{L,h}(x) = \operatorname*{arg\,min}_{\mu \in \mathcal{W}}\big\|F^{-1}_{\mu} - \widehat{f}_{0h}(x)\big\|_{2}.
$$

The following theorem establishes the rate of convergence for the LWaTL algorithm.

Theorem 4. Assume Condition 5 holds and the regularization parameter satisfies $\lambda \asymp n_0^{-1/2} h^{-1/2 - \epsilon}$ for some $\epsilon > 0$. Then, for the LWaTL algorithm and a fixed $x \in \mathbb{R}$, we have

$$
d_{\mathcal{W}}^{2}(\widehat{m}_{L,h}^{(0)}(x), m^{(0)}(x)) = O_{p}\bigg(h^{4} + n_{0}^{-1/2}h^{-1/2 - \epsilon}\Big[\psi_{L} + h^{2} + h^{-1/2}\Big(\big(\sum_{k = 0}^{K}n_{k}^{-1}\big)^{1/2} + \sum_{k = 0}^{K}\frac{\sqrt{n_{k}}}{n_{0} + n_{\mathcal{A}}}\Big)\Big]\bigg),
$$

where $\psi_L = \max_{1\leq k\leq K}\|f_{\oplus}^{(0)}(x) - f_{\oplus}^{(k)}(x)\|_2$.

Proof.
Note that

$$
\begin{array}{l}
d_{\mathcal{W}}^{2}(\widehat{m}_{L,h}^{(0)}(x), m^{(0)}(x)) \leq 2\{d_{\mathcal{W}}^{2}(m_{L,h}^{(0)}(x), m^{(0)}(x)) + d_{\mathcal{W}}^{2}(m_{L,h}^{(0)}(x), \widehat{m}_{L,h}^{(0)}(x))\} \\
\leq 2\left\{d_{\mathcal{W}}^{2}\left(m_{L,h}^{(0)}(x), m^{(0)}(x)\right) + \left\|f_{h}^{(0)}(x) - \widehat{f}_{0h}(x)\right\|_{2}^{2}\right\}.
\end{array}
$$

For the first term, we can use Theorem 3 of [26], since all of its assumptions can be checked easily, in a manner similar to the proof of Lemma 1. Here we should consider the metric space $BV((0,1),H_4)$ for some $H_4 > 0$, endowed with the probability measure naturally induced by the canonical embedding of the Wasserstein space.

For the second term, first note that, using Theorem 4 of [26], we have $\|f_h^{(0)}(x) - \widehat{f}_h^{(0)}(x)\|_2 = O_p((n_0h)^{-1/2})$. Then, similar to the proof of Theorem 2, we have

$$
\begin{array}{l}
\left\|f_{h}^{(0)}(x) - \widehat{f}_{0h}(x)\right\|_{2}^{2} \leq -\lambda\left\|\widehat{f}_{0h}(x) - \widehat{f}_{h}(x)\right\|_{2} + \lambda\left\|f_{h}^{(0)}(x) - \widehat{f}_{h}(x)\right\|_{2} \\
\quad + 2\|\widehat{f}_{h}^{(0)}(x) - f_{h}^{(0)}(x)\|_{2}\|\widehat{f}_{0h}(x) - f_{h}^{(0)}(x)\|_{2}.
\end{array}
$$

Hence, on the event $E_{n}\coloneqq \{\|f_{h}^{(0)}(x) - \widehat{f}_{h}^{(0)}(x)\|_{2}\leq \frac{1}{2}\lambda\}$,

$$
\left\|f_{h}^{(0)}(x) - \widehat{f}_{0h}(x)\right\|_{2}^{2} \leq 2\lambda\left\|f_{h}^{(0)}(x) - \widehat{f}_{h}(x)\right\|_{2} \leq 2\lambda\left(\psi_{h} + \left\|f_{h}(x) - \widehat{f}_{h}(x)\right\|_{2}\right), \tag{A.9}
$$

where $\psi_h = \max_{1\leq k\leq K}\|f_h^{(k)} - f_h^{(0)}\|_2$.
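Throughout, discrepancies such as $\psi_h$ are $L^2(0,1)$ distances between quantile functions; for distributions on the real line this is exactly the 2-Wasserstein geometry, $d_{\mathcal{W}}^2(\mu,\nu) = \int_0^1 (F_\mu^{-1}(t) - F_\nu^{-1}(t))^2\,dt$. A minimal numeric sketch of this identity (names are ours, not from the paper):

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule for samples y on grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def w2_squared(qf_mu, qf_nu, grid):
    """Squared 2-Wasserstein distance between distributions on the real line,
    computed as the squared L2(0,1) distance of their quantile functions."""
    diff = qf_mu(grid) - qf_nu(grid)
    return trapezoid(diff ** 2, grid)

t = np.linspace(0.0, 1.0, 10001)
# Uniform(0,1) vs Uniform(0,2): quantile functions t and 2t, so the
# squared distance is \int_0^1 t^2 dt = 1/3.
d2 = w2_squared(lambda u: u, lambda u: 2.0 * u, t)
```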
For the first term of (A.9), using Theorem 3 of [26] and (i), (ii) in Condition 5, we have

$$
\psi_{h} = \psi_{L} + O(h^{2}).
$$

For the second term of (A.9), we can follow the proof of Lemma 3 to study the asymptotic rate of convergence. The difference is that we do not assume covariates are sub-Gaussian but

# Algorithm 4 Adaptive Local Wasserstein Transfer Learning (ALWaTL)

Input: Target and source data $\{(x_i^{(0)},\nu_i^{(0)})\}_{i = 1}^{n_0}\cup \big(\cup_{1\leq k\leq K}\{(x_i^{(k)},\nu_i^{(k)})\}_{i = 1}^{n_k}\big)$, regularization parameter $\lambda$, bandwidth $h$, number of informative sources $L$, and query point $x\in \mathbb{R}$.

Output: Target estimator $\widehat{m}_{L,h}^{(0)}(x)$

1: Compute discrepancy scores. For each source dataset $k = 1, \ldots, K$, compute the empirical discrepancy

$$
\widehat{\psi}_{k,h} = \left\|\widehat{f}_{h}^{(0)}(x) - \widehat{f}_{h}^{(k)}(x)\right\|_{2},
$$

where $\widehat{f}_h^{(k)}(x) = n_k^{-1}\sum_{i = 1}^{n_k}s_{iL}^{(k)}(x,h)F_{\nu_i^{(k)}}^{-1}$. Construct the adaptive informative set by selecting the $L$ smallest discrepancy scores

$$
\widehat{\mathcal{A}} = \{1 \leq k \leq K: \widehat{\psi}_{k,h} \text{ is among the smallest } L \text{ values}\}.
$$

2: Weighted auxiliary estimator

$$
\widehat{f}_{h}(x) = \frac{1}{\sum_{k\in\widehat{\mathcal{A}}\cup\{0\}}n_{k}}\sum_{k\in\widehat{\mathcal{A}}\cup\{0\}}n_{k}\widehat{f}_{h}^{(k)}(x).
$$

3: Bias correction using target data

$$
\widehat{f}_{0h}(x) = \operatorname*{arg\,min}_{g\in L^{2}(0,1)}\frac{1}{n_{0}}\sum_{i = 1}^{n_{0}}s_{iL}^{(0)}(x,h)\|F_{\nu_{i}^{(0)}}^{-1} - g\|_{2}^{2} + \lambda\|g - \widehat{f}_{h}(x)\|_{2}.
$$

4: Projection to Wasserstein space

$$
\widehat{m}^{(0)}_{L,h}(x) = \operatorname*{arg\,min}_{\mu \in \mathcal{W}}\big\|F^{-1}_{\mu} - \widehat{f}_{0h}(x)\big\|_{2}.
$$

we have upper bounds for moments related to covariates and the kernel. In detail, we define $\tilde{s}_{iL}^{(k)}(x,h) := K_h(X_i^{(k)} - x)\frac{u_2^{(k)} - u_1^{(k)}(X_i^{(k)} - x)}{(\sigma_0^{(k)})^2}$ and $D_{i}^{(k)}(g,x) \coloneqq \|F_{\nu_{i}^{(k)}}^{-1} - g\|_{2}^{2} - \|F_{\nu_{i}^{(k)}}^{-1} - f_{h}^{(k)}(x)\|_{2}^{2}$. Then, similar to (A.6) and the proof of Theorem 4 in [26], we have for small $\delta > 0$

$$
\begin{array}{l}
E\Big\{\sup_{d_{L^{2}}(g, f_{h}^{(k)}) < \delta}\Big|\frac{1}{n_{0} + n_{\mathcal{A}}}\sum_{k = 0}^{K}\sum_{i = 1}^{n_{k}}\tilde{s}_{iL}^{(k)}(x,h)D_{i}^{(k)}(g,x) - \sum_{k = 0}^{K}\alpha_{k}E\{\tilde{s}_{iL}^{(k)}(x,h)D_{i}^{(k)}(g,x)\}\Big|\Big\} \\
= O\Big(\delta h^{-1/2}\frac{\sum_{k = 0}^{K}\sqrt{n_{k}}}{n_{0} + n_{\mathcal{A}}}\Big).
\end{array}
$$

Another term we need to bound is $\sum_{k=0}^{K}\sum_{i=1}^{n_k}|s_{iL}^{(k)}(x,h) - \tilde{s}_{iL}^{(k)}(x,h)|$. Note that we can define

$$
B_M := \bigcap_{k=0}^{K}\Big(\big\{(\widehat{\sigma}_0^{(k)})^2 > \tfrac{1}{2}(\sigma_0^{(k)})^2\big\}\cap\bigcap_{j=0}^{2}\Big\{\frac{1}{n_k}\sum_{i=1}^{n_k}\big|K_h(X_i^{(k)} - x)(X_i^{(k)} - x)^j\big| < h^j M\Big\}\Big)
$$

for some large $M > 0$. Then it is easy to check that

$$
P\left(\left(B_{M}\right)^{c}\right) \leq \sum_{k = 0}^{K}\frac{C_{A1}}{n_{k}},
$$

for some $C_{A1} > 0$.
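Step 1 of Algorithm 4 above only requires ranking sources by their discrepancy scores. Assuming the weighted estimates $\widehat{f}_h^{(k)}(x)$ have already been evaluated on a common grid in $(0,1)$, the selection reduces to a few lines; a hedged sketch (function and variable names are ours):

```python
import numpy as np

def select_informative(f0_hat, f_hats, L, dt):
    """ALWaTL step 1: keep the L sources whose pooled quantile-function
    estimates are closest to the target estimate in L2(0,1).

    f0_hat : (T,) target estimate on a grid with spacing dt
    f_hats : (K, T) source estimates, row k-1 for source k
    """
    scores = np.sqrt(np.sum((f_hats - f0_hat) ** 2, axis=1) * dt)
    order = np.argsort(scores)             # ascending discrepancy
    return set((order[:L] + 1).tolist())   # 1-based source indices

# Toy check: sources 1 and 3 coincide (nearly) with the target, source 2 is far.
T, dt = 50, 1.0 / 50
f0 = np.zeros(T)
fk = np.stack([f0, f0 + 5.0, f0 + 0.1])
picked = select_informative(f0, fk, L=2, dt=dt)
```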
In $B_M$, observe that

$$
|s_{iL}^{(k)}(x,h) - \tilde{s}_{iL}^{(k)}(x,h)| \leq |W_{0n}^{(k)}K_{h}(X_{i}^{(k)} - x)| + |W_{1n}^{(k)}K_{h}(X_{i}^{(k)} - x)(X_{i}^{(k)} - x)|,
$$

where

$$
W_{0n}^{(k)} := \frac{\widehat{u}_{2}^{(k)}}{(\widehat{\sigma}_{0}^{(k)})^{2}} - \frac{u_{2}^{(k)}}{(\sigma_{0}^{(k)})^{2}}, \quad W_{1n}^{(k)} := \frac{\widehat{u}_{1}^{(k)}}{(\widehat{\sigma}_{0}^{(k)})^{2}} - \frac{u_{1}^{(k)}}{(\sigma_{0}^{(k)})^{2}}.
$$

Note that in $B_{M}$, it is easy to check

$$
\left|W_{0n}^{(k)}\right| \leq C_{A2}\left(\sum_{j = 0}^{2}h^{-j}\left|u_{j}^{(k)} - \widehat{u}_{j}^{(k)}\right|\right)
$$

and

$$
\left|W_{1n}^{(k)}\right| \leq C_{A2}\left(\sum_{j = 0}^{2}h^{-1 - j}\left|u_{j}^{(k)} - \widehat{u}_{j}^{(k)}\right|\right)
$$

for some constant $C_{A2} > 0$. Hence, in $B_M$,

$$
\begin{array}{l}
P\left(\sum_{k = 0}^{K}\sum_{i = 1}^{n_{k}}\left|s_{iL}^{(k)}(x,h) - \tilde{s}_{iL}^{(k)}(x,h)\right| > t\right) \\
\leq P\left(C_{A3}\sum_{k = 0}^{K}\alpha_{k}\left(\left|W_{0n}^{(k)}\right| + h\left|W_{1n}^{(k)}\right|\right) > t\right) \leq \frac{1}{t^{2}h}C_{A4}\sum_{k = 0}^{K}\frac{1}{n_{k}}.
\end{array}
$$

Hence $\sum_{k=0}^{K}\sum_{i=1}^{n_k}|s_{iL}^{(k)}(x,h) - \tilde{s}_{iL}^{(k)}(x,h)| = O_p\big(h^{-1/2}\big(\sum_{k=0}^{K}n_k^{-1}\big)^{1/2}\big)$. In addition, we can check that $\widehat{f}_h(x) \in BV((0,1), H_5)$ for some $H_5 > 0$ with probability tending to one, by showing $\frac{1}{n}\sum_{k=1}^{K}\sum_{i=1}^{n_k}|s_{iL}^{(k)}(x,h)| = O_p(1)$. Also, utilizing the same technique as in the proof of Lemma 3, it suffices to show that $\|f_h(x) - \widehat{f}_h(x)\|_2 = o_p(1)$, which is a simple generalization of Lemma 2 in [26].
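As a side note on the projection step used by all of the algorithms: $\widehat{f}_{0h}$ is a generic $L^2(0,1)$ function, and projecting it onto the Wasserstein space amounts to finding the closest nondecreasing (quantile) function, which on a grid is isotonic regression. A self-contained sketch of the pool-adjacent-violators algorithm (our illustration, not the authors' implementation):

```python
import numpy as np

def project_to_quantile(f):
    """L2 projection of a discretized function onto nondecreasing sequences
    (pool-adjacent-violators algorithm with equal weights)."""
    # Each block stores [value, size]; adjacent violators are merged by averaging.
    blocks = []
    for v in f:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, n2 = blocks.pop()
            v1, n1 = blocks.pop()
            blocks.append([(v1 * n1 + v2 * n2) / (n1 + n2), n1 + n2])
    return np.concatenate([np.full(n, v) for v, n in blocks])

# The decreasing pair (3, 2) is pooled into its average 2.5.
out = project_to_quantile(np.array([1.0, 3.0, 2.0, 4.0]))
```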
Eventually, we have

$$
\left\|f_{h}^{(0)}(x) - \widehat{f}_{0h}(x)\right\|_{2} = O_{p}\left(h^{-1/2}\left(\big(\sum_{k = 0}^{K}n_{k}^{-1}\big)^{1/2} + \sum_{k = 0}^{K}\frac{\sqrt{n_{k}}}{n_{0} + n_{\mathcal{A}}}\right)\right)
$$

and

$$
d_{\mathcal{W}}^{2}(\widehat{m}_{L,h}^{(0)}(x), m^{(0)}(x)) = O_{p}\bigg(h^{4} + n_{0}^{-1/2}h^{-1/2 - \epsilon}\Big[\psi_{L} + h^{2} + h^{-1/2}\Big(\big(\sum_{k = 0}^{K}n_{k}^{-1}\big)^{1/2} + \sum_{k = 0}^{K}\frac{\sqrt{n_{k}}}{n_{0} + n_{\mathcal{A}}}\Big)\Big]\bigg).
$$

![](images/9269869f81c287c5e9ea02682232516fcefdb85f8b1c0195aaa2a7551dd25712.jpg)

Remark 3. Theorem 4 can be extended to general metric spaces and serves as a parallel version of Lemma 3. Besides, if $n_*$ is much larger than $n_0$ and $\psi_L = O(n_0^{-k})$ with $k > \frac{6}{15 - 10\epsilon}$, then among bandwidth sequences $h = n_0^{-r}$, the optimal sequence is achieved at $r^* = \frac{k}{2}$, leading to the convergence rate

$$
d_{\mathcal{W}}^{2}(\widehat{m}_{L,h}^{(0)}(x), m^{(0)}(x)) = O_{p}\left(n_{0}^{-\frac{3k - 2\epsilon k + 2}{4}}\right),
$$

which surpasses the convergence rate of local Fréchet regression, $O_{p}(n_{0}^{-4/5})$ [26].

There is also a parallel version of Theorem 3, built upon the following condition.

Condition 6. Suppose the regularization parameter satisfies $\lambda \asymp n_0^{-1/2} h^{-1/2 - \epsilon}$ for some $\epsilon > 0$, and fix some $\epsilon' > \epsilon$.
There exists a non-empty subset $\mathcal{A} \subset \{1, \ldots, K\}$ such that

$$
\frac{h^{2} + (n_{*}h)^{-1/2} + (n_{0}h)^{-1/2}}{\min_{k\in\mathcal{A}^{C}}\psi_{k,L}} = o(1), \quad \frac{\max_{k\in\mathcal{A}}\psi_{k,L}}{h^{2} + (n_{*}h)^{-1/2} + n_{0}^{-1/2}h^{-1/2 + \epsilon'}} = O(1),
$$

where $\mathcal{A}^C = \{1,\dots,K\}\backslash\mathcal{A}$, $n_* = \min_{1\leq k\leq K}n_k$ and $\psi_{k,L} = \|f_{\oplus}^{(0)}(x) - f_{\oplus}^{(k)}(x)\|_2$.

Theorem 5. Assume Conditions 5 and 6 hold. Then for the ALWaTL algorithm we have

$$
\begin{array}{l}
d_{\mathcal{W}}^{2}(\widehat{m}_{L,h}^{(0)}(x), m^{(0)}(x)) \\
= O_{p}\bigg(h^{4} + n_{0}^{-1/2}h^{-1/2 - \epsilon}\Big[\max_{k\in\mathcal{A}}\psi_{k,L} + h^{2} + h^{-1/2}\Big(\big(\sum_{k\in\mathcal{A}\cup\{0\}}n_{k}^{-1}\big)^{1/2} + \sum_{k\in\mathcal{A}\cup\{0\}}\frac{\sqrt{n_{k}}}{\sum_{k\in\mathcal{A}\cup\{0\}}n_{k}}\Big)\Big]\bigg).
\end{array}
$$

Proof. We first prove $P(\widehat{\mathcal{A}} = \mathcal{A})\to 1$. Note that

$$
\begin{array}{l}
P(\widehat{\mathcal{A}} = \mathcal{A}) \geq P(\max_{k\in\mathcal{A}}\widehat{\psi}_{k,h} < \min_{k\in\mathcal{A}^{C}}\widehat{\psi}_{k,h}) \\
\geq 1 - P\left(\max_{1\leq k\leq K}\left(|\widehat{\psi}_{k,h} - \psi_{k,h}| + |\psi_{k,h} - \psi_{k,L}|\right) > \left|\max_{k\in\mathcal{A}}\psi_{k,L} - \min_{k\in\mathcal{A}^{C}}\psi_{k,L}\right|\right) \\
\geq 1 - K\max_{1\leq k'\leq K}P\left(|\widehat{\psi}_{k',h} - \psi_{k',h}| + |\psi_{k',h} - \psi_{k',L}| > \left|\max_{k\in\mathcal{A}}\psi_{k,L} - \min_{k\in\mathcal{A}^{C}}\psi_{k,L}\right|\right).
\end{array}
$$

According to Theorem 3 of [26], $|\psi_{k,h} - \psi_{k,L}| = O(h^2)$.
Utilizing Theorem 4 of [26], $|\psi_{k,h} - \widehat{\psi}_{k,h}| = O_p((n_kh)^{-1/2} + (n_0h)^{-1/2})$. According to Condition 6,

$$
\frac{\left|\max_{k\in\mathcal{A}}\psi_{k,L} - \min_{k\in\mathcal{A}^{C}}\psi_{k,L}\right|}{h^{2} + (n_{*}h)^{-1/2} + (n_{0}h)^{-1/2}} \to \infty.
$$

Hence

$$
P(\widehat{\mathcal{A}} = \mathcal{A}) = 1 - o(1).
$$

Then utilizing Theorem 4 where we substitute $\mathcal{A}$ for $\{1,\dots,K\}$, we have

$$
\begin{array}{l}
d_{\mathcal{W}}^{2}\left(\widehat{m}_{L,h}^{(0)}(x), m^{(0)}(x)\right) \\
= O_{p}\bigg(h^{4} + n_{0}^{-1/2}h^{-1/2 - \epsilon}\Big[\max_{k\in\mathcal{A}}\psi_{k,L} + h^{2} + h^{-1/2}\Big(\big(\sum_{k\in\mathcal{A}\cup\{0\}}n_{k}^{-1}\big)^{1/2} + \sum_{k\in\mathcal{A}\cup\{0\}}\frac{\sqrt{n_{k}}}{\sum_{k\in\mathcal{A}\cup\{0\}}n_{k}}\Big)\Big]\bigg).
\end{array}
$$

![](images/b317f1397ab263b6d3116b1c18f2bde6067253e559c325258a8a4d9019982656.jpg)

Remark 4. If $n_*$ is much larger than $n_0$, then the simplified rate is $O_p(h^4 + n_0^{-1/2} h^{3/2 - \epsilon} + n_0^{-1} h^{-1 + \epsilon' - \epsilon})$. Among bandwidth sequences $h = n_0^r$, the optimal choice is achieved at $r^* = -\frac{1}{5 - 2\epsilon'}$, leading to the convergence rate

$$
d_{\mathcal{W}}^{2}(\widehat{m}_{L,h}^{(0)}(x), m^{(0)}(x)) = O_{p}\big(n_{0}^{-\frac{4 - \epsilon - \epsilon'}{5 - 2\epsilon'}}\big),
$$

which is faster than the convergence rate of local Fréchet regression, $O_{p}(n_{0}^{-4/5})$ [26].

# E Transfer Learning for Wasserstein Regression with Empirical Measures

Fréchet regression assumes that each distribution is fully observed.
However, this assumption is often impractical, since in real-world settings one rarely observes datasets in which each observation itself constitutes a full distribution. To address this, Wasserstein regression with empirical distributions has been proposed [43], where distributions are replaced by their empirical versions, achieving a convergence rate of $O_{p}(n^{-1/2} + N_{\min}^{-1/4})$, where $N_{\min}$ denotes the minimum number of observations per distribution. In this section we apply the same modification to handle this obstacle, providing algorithms for global and local Wasserstein transfer learning with empirical measures (WaTL-EM and LWaTL-EM, respectively). The only difference from their full-distribution counterparts is that we substitute the empirical distributions $\widehat{\nu}_{i}^{(k)}$, defined as $\frac{1}{N_{i}^{(k)}}\sum_{j=1}^{N_{i}^{(k)}}\delta_{y_{ij}^{(k)}}$, for the true but unobservable distributions $\nu_{i}^{(k)}$, $1 \leq i \leq n_{k}$, $0 \leq k \leq K$, where we recall the settings in Subsection 3.2 and Appendix D, additionally assume that $\{y_{ij}^{(k)}\}_{j=1}^{N_{i}^{(k)}}$ are independently sampled from $\nu_{i}^{(k)}$, and define $\delta_{z}$ as the Dirac measure at $z$. In addition, we denote $N_{\min} = \min_{i,k} N_{i}^{(k)}$.

Theorem 6. Assume Conditions 1 and 2 hold and the regularization parameter satisfies $\lambda \asymp (n_0^{-1/2} + N_{\min}^{-1/4})^{1-\epsilon}$ for some $\epsilon > 0$. Then, for the WaTL-EM algorithm and a fixed $x \in \mathbb{R}^p$, it holds that

$$
d_{\mathcal{W}}^{2}(\widehat{m}_{G,EM}^{(0)}(x), m_{G}^{(0)}(x)) = O_{p}\Big(\big(n_{0}^{-1/2} + N_{\min}^{-1/4}\big)^{1-\epsilon}\big(\psi + \frac{\sum_{k = 0}^{K}\sqrt{n_{k}}}{n_{0} + n_{\mathcal{A}}} + (n_{0} + n_{\mathcal{A}})^{-1/2} + N_{\min}^{-1/4}\big)\Big).
$$

# Algorithm 5 Wasserstein Transfer Learning with Empirical measures (WaTL-EM)

Input: Target and source data $\{(x_{i}^{(0)},\{y_{ij}^{(0)}\}_{j = 1}^{N_{i}^{(0)}})\}_{i = 1}^{n_{0}}\cup \big(\cup_{1\leq k\leq K}\{(x_{i}^{(k)},\{y_{ij}^{(k)}\}_{j = 1}^{N_{i}^{(k)}})\}_{i = 1}^{n_{k}}\big)$, regularization parameter $\lambda$, and query point $x\in \mathbb{R}^p$.

Output: Target estimator $\widehat{m}_{G,EM}^{(0)}(x)$

1: Empirical measures: $\widehat{\nu}_i^{(k)} = \frac{1}{N_i^{(k)}}\sum_{j = 1}^{N_i^{(k)}}\delta_{y_{ij}^{(k)}}$
2: Weighted auxiliary estimator: $\widehat{f}_{EM}(x) = \frac{1}{n_0 + n_{\mathcal{A}}} \sum_{k=0}^{K} n_k \widehat{f}_{EM}^{(k)}(x)$, where $\widehat{f}_{EM}^{(k)}(x) = n_k^{-1} \sum_{i=1}^{n_k} s_{iG}^{(k)}(x) F_{\widehat{\nu}_i^{(k)}}^{-1}$.
3: Bias correction using target data: $\widehat{f}_{0,EM}(x) = \arg \min_{g\in L^2(0,1)}\frac{1}{n_0}\sum_{i = 1}^{n_0}s_{iG}^{(0)}(x)\| F_{\widehat{\nu}_i^{(0)}}^{-1} - g\|_2^2 + \lambda \| g - \widehat{f}_{EM}(x)\|_2$
4: Projection to Wasserstein space: $\widehat{m}_{G,EM}^{(0)}(x) = \arg \min_{\mu \in \mathcal{W}}\big\| F_{\mu}^{-1} - \widehat{f}_{0,EM}(x)\big\|_2$

# Algorithm 6 Local Wasserstein Transfer Learning with Empirical measures (LWaTL-EM)

Input: Target and source data $\{(x_{i}^{(0)},\{y_{ij}^{(0)}\}_{j = 1}^{N_{i}^{(0)}})\}_{i = 1}^{n_{0}}\cup \big(\cup_{1\leq k\leq K}\{(x_{i}^{(k)},\{y_{ij}^{(k)}\}_{j = 1}^{N_{i}^{(k)}})\}_{i = 1}^{n_{k}}\big)$, regularization parameter $\lambda$, bandwidth $h$ and query point $x\in \mathbb{R}$.
Output: Target estimator $\widehat{m}_{L,h,EM}^{(0)}(x)$

1: Empirical measures: $\widehat{\nu}_i^{(k)} = \frac{1}{N_i^{(k)}}\sum_{j = 1}^{N_i^{(k)}}\delta_{y_{ij}^{(k)}}$
2: Weighted auxiliary estimator

$$
\widehat{f}_{h,EM}(x) = \frac{1}{n_{0} + n_{\mathcal{A}}}\sum_{k = 0}^{K}n_{k}\widehat{f}_{h,EM}^{(k)}(x),
$$

where $\widehat{f}_{h,EM}^{(k)}(x) = n_k^{-1}\sum_{i = 1}^{n_k}s_{iL}^{(k)}(x,h)F_{\widehat{\nu}_i^{(k)}}^{-1}$ and $n_{\mathcal{A}} = \sum_{k = 1}^{K}n_{k}$.

3: Bias correction using target data

$$
\widehat{f}_{0h,EM}(x) = \operatorname*{arg\,min}_{g\in L^{2}(0,1)}\frac{1}{n_{0}}\sum_{i = 1}^{n_{0}}s_{iL}^{(0)}(x,h)\|F_{\widehat{\nu}_{i}^{(0)}}^{-1} - g\|_{2}^{2} + \lambda\|g - \widehat{f}_{h,EM}(x)\|_{2}.
$$

4: Projection to Wasserstein space

$$
\widehat{m}^{(0)}_{L,h,EM}(x) = \operatorname*{arg\,min}_{\mu \in \mathcal{W}}\big\|F_{\mu}^{-1} - \widehat{f}_{0h,EM}(x)\big\|_{2}.
$$

where $\psi = \max_{1\leq k\leq K}\|f^{(0)}(x) - f^{(k)}(x)\|_2$ quantifies the maximum discrepancy between the target and the sources.

Proof. First note that the following result has been established in [43]: $\|\widehat{f}_{EM}^{(k)}(x) - f^{(k)}(x)\|_2 = O_p(n_k^{-1/2} + N_{\min}^{-1/4})$.

Also, an analogue of Theorem 1 can be established in a similar way; note that in the analysis in the proof of Lemma 3 we have established that

$$
\frac{1}{n}\sum_{k = 1}^{K}\sum_{i = 1}^{n_{k}}|s_{iG}^{(k)}(x)| = O_{p}(1).
$$

Then using Theorem 7.9 in [3], we have

$$
\begin{array}{l}
E\Big[\frac{1}{n}\sum_{k = 1}^{K}\sum_{i = 1}^{n_{k}}|s_{iG}^{(k)}(x)|\|F_{\nu_{i}^{(k)}}^{-1} - F_{\widehat{\nu}_{i}^{(k)}}^{-1}\|_{2}\,\Big|\,\{(x_{i}^{(0)},\{y_{ij}^{(0)}\}_{j = 1}^{N_{i}^{(0)}})\}_{i = 1}^{n_{0}}\cup\big(\cup_{1\leq k\leq K}\{(x_{i}^{(k)},\{y_{ij}^{(k)}\}_{j = 1}^{N_{i}^{(k)}})\}_{i = 1}^{n_{k}}\big)\Big] \\
\leq \frac{1}{2N_{\min}^{1/4}}\frac{1}{n}\sum_{k = 1}^{K}\sum_{i = 1}^{n_{k}}|s_{iG}^{(k)}(x)| = O_{p}(N_{\min}^{-1/4}).
\end{array}
$$

Hence we have $\|\widehat{f} - \widehat{f}_{EM}\|_2 = O_p(N_{\min}^{-1/4})$. Then the arguments in the proof of Theorem 2 still hold if $F_{\nu^{(0)}}^{-1}$, $\widehat{f}^{(0)}$, $\widehat{f}$ are replaced by their empirical-measure versions, and the rate becomes

$$
O_{p}\Big((n_{0}^{-1/2} + N_{\min}^{-1/4})^{1-\epsilon}\big(\psi + \frac{\sum_{k = 0}^{K}\sqrt{n_{k}}}{n_{0} + n_{\mathcal{A}}} + (n_{0} + n_{\mathcal{A}})^{-1/2} + N_{\min}^{-1/4}\big)\Big).
$$

![](images/2b5219b1db621513b3e3a768996d2b490959b088a567bdad8e2acdf008d852b7.jpg)

Theorem 7. Assume Condition 5 holds and the regularization parameter satisfies $\lambda \asymp (n_0^{-1/2}h^{-1/2} + N_{\min}^{-1/4})^{1-\epsilon}$ for some $\epsilon > 0$. Then, for the LWaTL-EM algorithm and a fixed $x \in \mathbb{R}$, it holds that

$$
\begin{array}{l}
d_{\mathcal{W}}^{2}\left(\widehat{m}_{L,h,EM}^{(0)}(x), m^{(0)}(x)\right) \\
= O_{p}\bigg(\big(n_{0}^{-1/2}h^{-1/2} + N_{\min}^{-1/4}\big)^{1-\epsilon}\Big[N_{\min}^{-1/4} + \psi_{L} + h^{2} + h^{-1/2}\Big(\sqrt{\sum_{k = 0}^{K}\frac{1}{n_{k}}} + \sum_{k = 0}^{K}\frac{\sqrt{n_{k}}}{n_{0} + n_{\mathcal{A}}}\Big)\Big]\bigg),
\end{array}
$$

where $\psi_L = \max_{1\leq k\leq K}\|f_{\oplus}^{(0)}(x) - f_{\oplus}^{(k)}(x)\|_2$.

Proof.
Using the same arguments as in the proof of Theorem 2 in [43], we can show that

$$
\|f_{h}^{(0)}(x) - \widehat{f}_{h,EM}^{(0)}(x)\|_{2} = O_{p}(n_{0}^{-1/2}h^{-1/2} + N_{\min}^{-1/4}).
$$

The analysis in the proof of Theorem 4 shows

$$
\frac{1}{n}\sum_{k = 1}^{K}\sum_{i = 1}^{n_{k}}\left|s_{iL}^{(k)}(x,h)\right| = O_{p}(1).
$$

Then using Theorem 7.9 in [3], we have

$$
\begin{array}{l}
E\Big[\frac{1}{n}\sum_{k = 1}^{K}\sum_{i = 1}^{n_{k}}|s_{iL}^{(k)}(x,h)|\|F_{\nu_{i}^{(k)}}^{-1} - F_{\widehat{\nu}_{i}^{(k)}}^{-1}\|_{2}\,\Big|\,\{(x_{i}^{(0)},\{y_{ij}^{(0)}\}_{j = 1}^{N_{i}^{(0)}})\}_{i = 1}^{n_{0}}\cup\big(\cup_{1\leq k\leq K}\{(x_{i}^{(k)},\{y_{ij}^{(k)}\}_{j = 1}^{N_{i}^{(k)}})\}_{i = 1}^{n_{k}}\big)\Big] \\
\leq \frac{1}{2N_{\min}^{1/4}}\frac{1}{n}\sum_{k = 1}^{K}\sum_{i = 1}^{n_{k}}|s_{iL}^{(k)}(x,h)| = O_{p}(N_{\min}^{-1/4}).
\end{array}
$$

Hence we have $\|\widehat{f}_h - \widehat{f}_{h,EM}\|_2 = O_p(N_{\min}^{-1/4})$. Then the arguments in the proof of Theorem 4 still hold if $F_{\nu_i^{(0)}}^{-1}$, $\widehat{f}_h^{(0)}$, $\widehat{f}_h$ are replaced by their empirical-measure versions, and the rate becomes $O_p\Big((n_0^{-1/2}h^{-1/2} + N_{\min}^{-1/4})^{1-\epsilon}\big[N_{\min}^{-1/4} + \psi_L + h^2 + h^{-1/2}\big(\sqrt{\sum_{k = 0}^{K}\frac{1}{n_k}} + \sum_{k = 0}^{K}\frac{\sqrt{n_k}}{n_0 + n_{\mathcal{A}}}\big)\big]\Big)$.
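Finally, the empirical-measure algorithms only need the quantile function of $\widehat{\nu}_i^{(k)} = (N_i^{(k)})^{-1}\sum_j \delta_{y_{ij}^{(k)}}$, which for distributions on $\mathbb{R}$ is just the sorted sample read off at grid points in $(0,1)$. A minimal sketch (names are ours):

```python
import numpy as np

def empirical_quantile_fn(samples, grid):
    """Quantile function of the empirical measure (1/N) sum_j delta_{y_j},
    evaluated at points t in (0, 1): the ceil(N t)-th order statistic."""
    y = np.sort(np.asarray(samples, dtype=float))
    N = len(y)
    idx = np.minimum(np.ceil(N * np.asarray(grid)).astype(int) - 1, N - 1)
    return y[np.maximum(idx, 0)]

t = np.array([0.1, 0.5, 0.9])
# N = 4: t = 0.1 gives the 1st order statistic, t = 0.5 the 2nd, t = 0.9 the 4th.
q = empirical_quantile_fn([3.0, 1.0, 2.0, 4.0], t)
```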
\ No newline at end of file diff --git a/NeurIPS/2025/Wasserstein Transfer Learning/images.zip b/NeurIPS/2025/Wasserstein Transfer Learning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..cfc8887848bbf658298622b28fe565a477664c66 --- /dev/null +++ b/NeurIPS/2025/Wasserstein Transfer Learning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:26536314fdb0b5cc9b7a26fa7c0082ba9f1494d8cbd9f17a50eab5732a32b3a8 +size 1316131 diff --git a/NeurIPS/2025/Wasserstein Transfer Learning/layout.json b/NeurIPS/2025/Wasserstein Transfer Learning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5c14ac5d977032030a99b524ed0f61e310d1748c --- /dev/null +++ b/NeurIPS/2025/Wasserstein Transfer Learning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:27fddf45a079f183b277f43f923345426d8e7a0c824f5be101463ba82f0e8d7d +size 1465923 diff --git a/NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/85ec6fc7-4946-42fc-bcea-a7c0495ae730_content_list.json b/NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/85ec6fc7-4946-42fc-bcea-a7c0495ae730_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..db9031b0d9ffd27dd23090f65a2f240dbc25f7a7 --- /dev/null +++ b/NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/85ec6fc7-4946-42fc-bcea-a7c0495ae730_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:085a569e7d1576b2e3d5ce0683b2e2c24a9fe90e6e3914b5e977752a420b1545 +size 171196 diff --git a/NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/85ec6fc7-4946-42fc-bcea-a7c0495ae730_model.json b/NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/85ec6fc7-4946-42fc-bcea-a7c0495ae730_model.json new file mode 
100644 index 0000000000000000000000000000000000000000..43dfab72cf64f49a64c015c0056fab810d5901fe --- /dev/null +++ b/NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/85ec6fc7-4946-42fc-bcea-a7c0495ae730_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a6f6cdb2109e06ce53557feb9b4c44004c4cef92fe4c17366f5e2637657e165 +size 222100 diff --git a/NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/85ec6fc7-4946-42fc-bcea-a7c0495ae730_origin.pdf b/NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/85ec6fc7-4946-42fc-bcea-a7c0495ae730_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..667e763664edfab2d8c554ab15c21073ff7bf400 --- /dev/null +++ b/NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/85ec6fc7-4946-42fc-bcea-a7c0495ae730_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:836a66b0b73e5a62cb4a223ff2518045175bfdb6c92e1ec427752c8808469162 +size 5503192 diff --git a/NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/full.md b/NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/full.md new file mode 100644 index 0000000000000000000000000000000000000000..faef25f4bbb7d136e6311d01eb37dbd3853a8763 --- /dev/null +++ b/NeurIPS/2025/Watch and Listen_ Understanding Audio-Visual-Speech Moments with Multimodal LLM/full.md @@ -0,0 +1,841 @@ +# Watch and Listen: Understanding Audio-Visual-Speech Moments with Multimodal LLM + +Zinuo Li $^{1}$ , Xian Zhang $^{1}$ , Yongxin Guo $^{2}$ , Mohammed Bennamoun $^{1}$ , Farid Boussaid $^{1}$ , Girish Dwivedi $^{1}$ , Luqi Gong $^{3*}$ , Qiuhong Ke $^{4*}$ + +https://github.com/zinuoli/TriSense + +1University of Western Australia 2Alibaba Group 3Zhejiang Laboratory 4Monash University + 
+![](images/28765dfdaebf76070ae1185db15804b336d75032ec68ede6756c629a1e81bdca.jpg) + +![](images/6f9dd2ca75de4c10915881718ae0efbf324806220ab3f55ae13085ed2ddcb3ca.jpg) + +![](images/2776f2d28fb95053ac3d58393cebf02aa8fd3cf8414d6bfcb2c42855fbd8483d.jpg) + +![](images/a12735efad3f1f5d088207eb4a666db6f73f2213b57646d4ce0ea7bd988edcb5.jpg) + +![](images/f3bec47393c09c9c80fcefc85e4260b3b0c348c37cc5acd384a1875c1e586426.jpg) +Speech: 35 yuan. +It's like 30 cents, +can you believe... + +![](images/b88e8daadb7b1924db368fcda4843b3ddf7635db8d8dc51ecb769c7fdc42a48e.jpg) + +![](images/702a1fadad690e57e93c29bf69c9cf059925829174e70d2141316cddc488104d.jpg) + +![](images/91075a538d8a1f35ddf9e8e3382830b784cfab704ed38d72d5709a87e4758095.jpg) + +![](images/27ec6fb093673994c07ee08e9c9fc6134868d33a179a971284dcd3c2bdbb988d.jpg) +Sound: Street Noise + +![](images/2e59821a1a146a7221476505680d9d6af3820d6a347e7d5c4769ea2f65d609a7.jpg) +Speech: People start to gather in this area, and this place is also known to very cheap street food and take out food... +Sound: Street Noise + +![](images/dfbb38ea0c4d36186534b8e8783ac3be43a62adcd3a1edabe6c5625c6623281e.jpg) +Speech: By the way guys, this video is sponsored by boksu, you guys know, we're trying out the boxes snacks for over a year... +Sound: Street Noise + +![](images/3d2e463d3282602cef0fde47cec226605b23f3bb16c5f60b0af2d0ea2b6b45e1.jpg) +Speech: I really really recommend if you've never been to one, to visit one of these local areas, and walk around one of the... +Sound: Street Noise, People Talking +Figure 1: TriSense supports segment captioning and moment retrieval for videos from audio, visual, and speech modalities, as well as any combination of them, covering a total of eight different tasks. + +![](images/b1cfe653843d1567bfa1ab500e1882c4df052f7bfa1265f75fa28ab04730245e.jpg) +Speech: Ok guys, there's one more shopping day in this area, and instead of me like backtracking... 
+Sound: Street Noise, People Talking + +# Abstract + +Humans naturally understand moments in a video by integrating visual and auditory cues. For example, localizing a scene in the video like "A scientist passionately speaks on wildlife conservation as dramatic orchestral music plays, with the audience nodding and applauding" requires simultaneous processing of visual, audio, and speech signals. However, existing models often struggle to effectively fuse and interpret audio information, limiting their capacity for comprehensive video temporal understanding. To address this, we present TriSense, a triple-modality large language model designed for holistic video temporal understanding through the integration of visual, audio, and speech modalities. Central to TriSense is a Query-Based Connector that adaptively reweights modality contributions based on the input query, enabling robust performance under modality dropout and allowing flexible combinations of available inputs. To support TriSense's multimodal capabilities, we introduce TriSense-2M, a high-quality dataset of over 2 million curated samples generated via an automated pipeline powered by fine-tuned LLMs. TriSense-2M includes long-form videos and diverse modality combinations, facilitating broad generalization. Extensive experiments across multiple benchmarks demonstrate the effectiveness of TriSense and its potential to advance multimodal video analysis. + +# 1 Introduction + +Human understanding of real-world events is inherently multimodal: we rely not only on vision but also on spoken language and audio cues to make sense of what is happening in a video. This integration allows us to interpret intention, emotion, and the significance of events—going beyond what is seen to include what is heard and said. However, existing Multimodal Large Language Models (MLLMs) often fall short of achieving this level of nuanced video temporal understanding. 
While advances in vision-language modeling and temporal localization have improved language–visual alignment [11, 23, 15, 12, 38, 19, 28], most models still rely solely on visual inputs. As a result, they perform poorly on tasks requiring the integration of audio and speech—particularly when one or more modalities are missing, noisy, or contextually irrelevant. This stands in contrast to human perceptual robustness and significantly limits model generalization in real-world scenarios, such as accurately localizing events or generating rich multimodal descriptions.

Challenges and Current Limitations. Two core challenges continue to hinder progress in multimodal temporal understanding: 1) Insufficient and Incomplete Training Data: Current datasets are often composed of short clips and lack large-scale, fully annotated examples across all three modalities—vision, audio, and speech—which are essential for effective multimodal pretraining [14, 8, 13, 12]. This scarcity hampers the development of robust MLLMs. Moreover, real-world videos often contain incomplete or inconsistent modality coverage, due to factors like variable recording setups, intentional omissions (e.g., silent footage or background music), or the natural absence of certain signals in specific scenes. When models are trained predominantly on videos with all modalities present, they often fail at inference time when confronted with missing or degraded inputs—a common scenario in the wild. 2) Lack of Modality Adaptation: Current MLLMs are generally not equipped to assess the relative importance of each modality based on task or query context. Recent models such as LongVALE [10] and Qwen2.5-Omni [34] attempt to integrate multiple modalities but fall short in adaptivity.
For instance, LongVALE compresses all modality tokens into a single representation, resulting in information loss and poor handling of missing modalities. It also lacks an adaptive dropout strategy, leading to unstable performance when modality availability varies. Qwen2.5-Omni introduces temporal positional embeddings, but still fails to capture fine-grained temporal dependencies, limiting its effectiveness on complex moment-level tasks, as demonstrated in our experiments.

Key Contributions. We argue that understanding complex moments in video requires not only broader modality coverage but also an adaptive mechanism to selectively emphasize the most relevant modalities depending on the task and query. Our approach addresses these challenges through the following key contributions:

1) We introduce TriSense-2M, a large-scale multimodal dataset containing 2 million annotations. Each video instance in the dataset includes event-based annotations across vision, audio, and speech modalities, with flexible combinations and natural absences of modalities. The dataset supports a wide variety of scenes and includes long-form videos averaging 905 seconds—significantly longer than those in existing datasets—enabling deeper and more realistic temporal understanding. Importantly, queries are expressed in high-quality natural language, aligned with temporal annotations, and span diverse modality configurations to facilitate robust multimodal learning.
2) We propose TriSense, a triple-modality MLLM designed for both video segment captioning and moment retrieval under diverse modality configurations. As depicted in Figure 1, TriSense handles multimodal video data with varying availability of vision, audio, and speech over the temporal dimension. Crucially, it includes a Query-Based Connector that dynamically adjusts modality weights based on the query's content and context.
This allows the model to emphasize the most informative modalities (e.g., prioritizing vision if most relevant) while down-weighting irrelevant or missing ones—enabling strong performance even under incomplete modality conditions. +3) We conduct extensive experiments on two core tasks—video segment captioning and moment retrieval—across eight modality configurations, including zero-shot evaluation on public benchmarks. TriSense achieves strong performance on both the new TriSense-2M dataset and existing benchmarks, laying a solid foundation for future research in multimodal temporal video understanding. + +# 2 Related Work + +# 2.1 Video Temporal Understanding MLLMs + +Video temporal understanding focuses on modeling how events unfold over time within a video, enabling tasks such as moment retrieval, segment captioning, and dense video captioning. Vision-language models (VLMs) have demonstrated strong capabilities in solving real-world problems, including in zero-shot scenarios without task-specific fine-tuning. However, many of these models still face challenges when it comes to understanding temporal dynamics [36, 5]. To address this, several models have been fine-tuned on video grounding datasets that emphasize temporal structure—such as TimeChat [27], VTimeLLM [15]—to enhance their temporal reasoning abilities. More recently, Momentor [23] introduced a time encoder to correct errors caused by time token quantization, while VTGLLM [12] employed specialized time tokens and temporal position embeddings to help video LLMs better capture the timing of events. In a different approach, TRACE [11] applied principles from causal language modeling to propose causal event modeling for videos. It also introduced a lightweight time tower for encoding temporal information, achieving solid performance in temporal understanding. 
+ +Extending beyond vision-language modeling, Multimodal Large Language Models (MLLMs) integrate visual, audio, and speech modalities to enable richer video analysis. To improve performance, recent efforts have focused on incorporating these additional modalities. For example, LongVALE [10] compresses all modality tokens into a single token, but this leads to loss of fine-grained details such as audio context or speech intonation. Qwen2.5-Omni [34], on the other hand, introduces the Thinker-Talker architecture—a fully end-to-end multimodal system that handles text, images, audio, and video inputs. To better align audio and video along the temporal dimension, the authors propose TMRoPE, a timing-aware positional encoding technique. However, our experiments show that this model still struggles with grounding in long-form videos. + +Despite these advancements, many existing models are either limited to visual modalities or lack support for flexible combinations of different modalities. This restricts their effectiveness in temporal understanding tasks, particularly in scenarios where some modalities may be absent or noisy. These challenges motivate our pursuit of MLLMs that can robustly handle various modality combinations while maintaining strong temporal reasoning and general understanding performance. + +# 2.2 Benchmarks for Temporal Understanding + +The development of benchmark datasets has played a crucial role in advancing video temporal understanding. Early contributions such as DiDeMo [14] introduced natural language queries for moment localization in videos. Subsequent datasets like Charades-STA [8] and ActivityNet Captions [18] expanded on this by covering a wider variety of actions and longer video durations, significantly pushing the field forward. More recently, InternVid2 [32] has emerged as a large-scale foundational dataset, offering 61 million audio-visual-speech annotations. 
However, many of these annotations are disjointed, lacking coherence across modalities, and the dataset contains a substantial number of low-quality captions due to its scale. + +To address these limitations, VAST-27M [3] and VALOR [2] were introduced. Both datasets offer high-quality, omni-modal (audio-visual-speech) annotations with better interrelated features than InternVid2, supporting comprehensive multimodal understanding for MLLMs. Nonetheless, they rely on simple concatenation of captions across modalities and do not incorporate cross-modal reasoning. Moreover, these benchmarks provide only coarse-grained captions for short clips, making them inadequate for fine-grained understanding of long-form video. Although both datasets improve on InternVid2, they repeat similar limitations: their annotations are not contextually integrated across modalities, and their temporal granularity is too coarse for modeling nuanced event transitions in extended videos. In response to these shortcomings, LongVALE [10] was proposed, featuring 108K high-quality annotations with omni-modal coverage. While it offers notable improvements, VAST-27M, VALOR, and LongVALE all overlook a critical issue: the dynamic presence of modalities. In real-world videos, audio, visual, and speech inputs are not always available simultaneously, raising important questions about model robustness in the face of missing modalities. + +In conclusion, existing benchmarks often focus exclusively on visual information or fail to adequately support flexible multimodal integration. These limitations highlight the need for improved datasets and serve as a key motivation for our work. + +# 3 Data Construction + +As discussed in Sections 1 and 2, while some existing datasets include all three modalities—visual, audio, and speech—they generally assume that these modalities are always present simultaneously [3, 2, 10], overlooking the importance of supporting arbitrary combinations. 
This assumption limits the development of models that can handle missing or partial inputs effectively. To address this, we introduce a new large-scale, high-quality dataset designed to support both fully multimodal and partially multimodal scenarios. + +![](images/02f37b00420f448ca82b291686e32589f37a35137780f15208f358c196da7153.jpg) +Figure 2: We employ an automated framework to build our dataset by leveraging modality-specific captions from vision, audio, and speech streams. Two large language models (LLMs) are trained for this process: a Generator, which fuses the three input captions into multi-modal outputs (AVS, AV, VS), and a Judger, which evaluates the semantic quality of the generated captions. The Judger assigns an average quality score between 0 and 5 based on alignment with the original inputs. Samples scoring $\geq 3$ are retained, while those scoring $< 3$ are discarded. + +Our dataset includes longer video durations, making it suitable for realistic and fine-grained temporal grounding tasks. It enables models to "watch and listen" in a human-like manner, flexibly attending to available visual, auditory, and speech cues to identify relevant moments. In addition, we include caption data to promote deeper video understanding and narrative generation. To support scalable and consistent annotation, we developed a fully automated data construction pipeline, as shown in Figure 2. + +We begin by selecting subsets from InternVid [32] and VAST [3] as our raw sources for both video content and initial captions. Each video clip is annotated with three distinct captions: a visual caption that describes observable scenes and actions, an audio caption that details acoustic elements, and a speech caption that transcribes spoken content. These modality-specific captions are generated using specialized annotation pipelines adapted from previous works [32, 3]. 
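The Judger's retain-or-discard rule from Figure 2 amounts to thresholding an average quality score. Below is a minimal sketch with hypothetical sample and field names (`clip_001`, `scores`); the real Generator and Judger are fine-tuned LLMs and are not shown here:

```python
# Sketch of the score-and-filter stage in Figure 2. The sample ids and the
# `scores` field are hypothetical illustration, not the pipeline's real schema.

def filter_by_judger(samples, threshold=3.0):
    """Keep samples whose average Judger score (0-5) is >= threshold."""
    kept, dropped = [], []
    for s in samples:
        avg = sum(s["scores"]) / len(s["scores"])  # one score per AVS/AV/VS caption
        (kept if avg >= threshold else dropped).append(s)
    return kept, dropped

samples = [
    {"id": "clip_001", "scores": [4.5, 4.0, 3.5]},  # coherent fusion -> retained
    {"id": "clip_002", "scores": [2.0, 2.5, 1.5]},  # mismatched modalities -> discarded
]
kept, dropped = filter_by_judger(samples)
print([s["id"] for s in kept])  # → ['clip_001']
```

In the actual pipeline this filtering is applied over multiple rounds of generation and evaluation, reducing the initial 5 million candidates to 2 million retained samples.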
To enable reasoning across modalities, our goal is to synthesize omni-modal captions that flexibly combine distinct unimodal annotations. These are crucial for training models capable of comprehensive temporal understanding. To generate these captions, we use two custom-trained large language models based on Qwen2.5-72B [35]: a Generator and a Judger. The Generator merges modality-specific captions into unified representations for three combinations: AVS (Audio-Visual-Speech), AV (Audio-Visual), and VS (Visual-Speech). These captions are designed to capture cross-modal interactions, such as clapping that aligns with vocal rhythm or speech that corresponds with visual context. The Judger evaluates the quality of each synthesized caption by measuring its semantic alignment with the original unimodal annotations. It assigns a quality score ranging from 0 to 5 and filters out samples with inconsistencies, such as speech that does not relate to visual actions or mismatched audio-visual descriptions. To train these models, we first build a high-quality reference corpus using GPT-o1 [16], which is then manually refined and filtered. From this curated set, we select 10,000 samples to train the Generator and 3,000 samples to train the Judger. Further training details are provided in the Appendix.

![](images/3e0511827bd944044bb7314b8d808168ebac109145d367658a52b0a0e9e7cea5.jpg)
Figure 3: Video duration distribution. Most videos are 10-20 minutes long $(83.5\%)$ , supporting long-form temporal understanding.

Table 1: Comparison of temporal understanding datasets. TriSense-2M uniquely supports all three modalities with long video lengths and explicit handling of modality dropout.
| Datasets | Annotations | Avg. len | Visual | Audio | Speech | Modality Dropout |
|---|---|---|---|---|---|---|
| VALOR [2] | 1.32M | 10s | ✓ | ✓ | ✗ | ✗ |
| VAST [3] | 27M | 30s | ✓ | ✓ | ✗ | ✗ |
| UnAV-100 [9] | 30K | 42.1s | ✓ | ✓ | ✗ | ✗ |
| Charades-STA [8] | 16K | 30s | ✓ | ✗ | ✗ | ✗ |
| ActivityNet-Captions [13] | 20K | 180s | ✓ | ✗ | ✗ | ✗ |
| LongVALE [10] | 108K | 235s | ✓ | ✓ | ✓ | ✗ |
| TriSense-2M | 2M | 905s | ✓ | ✓ | ✓ | ✓ |
+ +Table 2: Detailed Information of TriSense-2M, where Avg/Min/Max Duration represent the average, minimum, and maximum duration, respectively. 0–5s / 5–10s / 10–15s, etc., represent the proportions of different duration intervals. + +
| Total Events | Avg Duration | Min Duration | Max Duration | 0-5s | 5-10s | 10-15s | 15-20s | 20-30s |
|---|---|---|---|---|---|---|---|---|
| 1,940,522 | 6.87s | 2.00s | 30.00s | 35.5% | 46.2% | 12.6% | 4.2% | 1.5% |
+ +The data construction pipeline processes an initial pool of 5 million multimodal video samples containing visual, audio, and speech captions. Through multiple rounds of generation, evaluation, and filtering, the Judger retains only high-quality outputs, resulting in a final dataset of 2 million samples drawn from approximately 38,000 long videos. The distribution of video durations is shown in Figure 3. The average video length is 905 seconds, nearly four times longer than that of the closest existing dataset, which averages 235 seconds [10]. This curated dataset enables robust temporal reasoning across diverse modality combinations and forms the foundation for training and evaluating our TriSense model. Overall comparisons with existing datasets are provided in Table 1, and more detailed examples are included in the Appendix. + +# 4 TriSense Architecture + +The overall architecture of the TriSense model is illustrated in Figure 4. The model is designed to process visual, audio, and speech information extracted from a video in order to answer a text-based query. Each modality is first processed by one of three specialized expert encoders [24, 26, 4]. The resulting feature representations are then passed through modality-specific projectors and integrated with the query using Cross-Attention mechanisms, allowing the model to capture fine-grained interactions between each modality and the query. A Query-Based Connector subsequently fuses the modality representations and adaptively adjusts their contributions based on the content of the query. This fused multimodal representation is enriched with temporal information and then input into a Large Language Model (LLM) backbone, which produces the final output. To effectively model temporal dependencies, the architecture includes a dedicated Time Encoder [11] and employs task-specific heads to ensure outputs are temporally aligned and task-relevant. 
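At the shape level, the per-frame interaction between modality tokens and the query can be sketched as follows. This is a toy NumPy sketch, not the actual implementation: the dimensions, the single-head attention, and the random features are illustrative stand-ins for the expert encoders, modality projectors, and Cross-Attention layers described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_tok, d = 64, 16, 32  # illustrative sizes, not the paper's real dims

# Stand-ins for the three expert encoders' projected token streams
vision = rng.standard_normal((n_frames, n_tok, d))
audio  = rng.standard_normal((n_frames, n_tok, d))
speech = rng.standard_normal((n_frames, n_tok, d))
query  = rng.standard_normal((d,))  # encoded text query

def cross_attend(feats, q):
    """Toy single-head cross-attention: reweight tokens by query relevance."""
    scores = feats @ q / np.sqrt(d)                       # (frames, tokens)
    attn = np.exp(scores - scores.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)                   # softmax over tokens
    return feats * attn[..., None]                        # query-relevant features

# Concatenate the three query-relevant streams before connector fusion
fused = np.concatenate(
    [cross_attend(m, query) for m in (vision, audio, speech)], axis=1)
print(fused.shape)  # → (64, 48, 32): 3 modalities x 16 tokens per frame
```

The fused token stream is what the Query-Based Connector then compresses and reweights before it reaches the LLM backbone.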
# 4.1 Multimodal Information Extraction

During training, given a video $V = \{F_1, F_2, \dots, F_T\}$ , where $T$ denotes the total number of frames, we first uniformly select $n = 64$ frames for further processing and to control memory usage. For each selected frame, we record its timestamp and extract an audio segment of $\pm 1$ second around the frame, resulting in a sequence of audio segments $A = \{A_1, A_2, \dots, A_n\}$ , each lasting 2 seconds. For instance, if a frame is sampled at 123.4 seconds, the corresponding audio segment spans from 122.4 to 124.4 seconds. We then use pretrained expert encoders to extract modality-specific tokens: $f_i^v$ , $f_i^a$ , $f_i^s$ for vision, audio, and speech, respectively.

To enhance the temporal awareness of the model, we encode the selected timestamp using a Time Encoder [11]. Each timestamp is first tokenized into a fixed-length character sequence comprising four integer digits, a decimal point, and one fractional digit, resulting in 6 tokens per frame. For example, the timestamp [123.4] is tokenized to $\langle 0\rangle \langle 1\rangle \langle 2\rangle \langle 3\rangle \langle .\rangle \langle 4\rangle$ . These tokens are then embedded to form time features $f(t)$ . Given the high token count from the three modalities, we apply Slot-Based Compression [12] as the Modality Projector to reduce the dimensionality of the input.

![](images/a8ad62d508dc97054320b22da7d69ac11c6506f2db96583b1d41a851426eddbb.jpg)
Figure 4: Architecture of the TriSense model. The model processes vision, audio, and speech via dedicated encoders and fuses them using a Query-Based Connector that assigns weights based on the query. The fused output, combined with temporal embeddings, is passed to an LLM for generating timestamped or textual responses.
This technique compresses the vision tokens $f_{i}^{v}$ , audio tokens $f_{i}^{a}$ , and speech tokens $f_{i}^{s}$ , into fixed-length vectors of 16 tokens each, denoted as $f_{i}^{\prime v}$ , $f_{i}^{\prime a}$ and $f_{i}^{\prime s}$ . + +# 4.2 Query-Based Connector + +To more effectively integrate multimodal inputs with the query and enhance sensitivity to salient features, we introduce a Query-Based Connector that adaptively balances the contributions of each modality based on the query's content, as illustrated in Figure 4. The compressed modality features $f_{i}^{\prime v}, f_{i}^{\prime a}$ and $f_{i}^{\prime s}$ obtained from Section 4.1, are passed through Cross-Attention layers, where they interact with the encoded query representation $f^{(q)}$ . The objective is for each attention layer to emphasize features that are most relevant to the query. The outputs of these layers are denoted as query-relevant features $f_{i}^{v,q}, f_{i}^{a,q}$ and $f_{i}^{s,q}$ , which reflect the alignment between each modality and the query. + +To dynamically determine the importance of each modality in relation to the query, we introduce an adaptive weighting mechanism. First, we apply global average pooling over the sequence dimension of each modality to derive compact global representations $c_v$ , $c_a$ , and $c_s$ . These vectors are concatenated and fed into a single-layer MLP $\mathcal{F}(\cdot)$ to generate unnormalized weights $w_v$ , $w_a$ , and $w_s$ . The weights are then normalized using a softmax function to yield a valid probability distribution over the modalities, satisfying the constraint $w_v + w_a + w_s = 1$ . 
The computation is formalized below: + +$$ +\mathbf {c} _ {m} = \frac {1}{T _ {m}} \sum_ {t = 1} ^ {T _ {m}} \mathbf {x} _ {m, t}, \quad m \in \{v, a, s \} \tag {1} +$$ + +$$ +\tilde {w} _ {m} = \mathcal {F} \left(\left[ c _ {v} \| c _ {a} \| c _ {s} \right]\right) \in \mathbb {R} ^ {3}, \quad m \in \{v, a, s \} \tag {2} +$$ + +$$ +w _ {m} = \frac {\exp \left(\tilde {w} _ {m}\right)}{\sum_ {m ^ {\prime} \in \{v , a , s \}} \exp \left(\tilde {w} _ {m ^ {\prime}}\right)}, \quad m \in \{v, a, s \} \tag {3} +$$ + +Here, $\mathbf{c}_m$ is the compressed modality vector after global average pooling, $T_m$ indicates the sequence length of distinct modality $m$ , $\|\cdot\|$ denotes vector concatenation, and $\tilde{w}_m$ and $w_m$ represent unnormalized and normalized weights, respectively. + +After computing the weights, we multiply the previously obtained query-relevant features by their corresponding weights and concatenate them together to fuse a multimodal representation. To reduce the token count, we apply slot compression $C_{comp}(\cdot)$ again to compress the tokens from triple the amount back to a single scale. Finally, a two-layer MLP $\hat{\mathcal{F}} (\cdot)$ is used for further feature + +refinement, aligning the representation with the LLM's input dimensionality and enhancing its expressive capacity: + +$$ +\mathcal {X} _ {m} = \hat {\mathcal {F}} (C _ {c o m p} (\operatorname {c o n c a t} (w _ {v} f _ {i} ^ {v, q}, w _ {a} f _ {i} ^ {a, q}, w _ {s} f _ {i} ^ {s, q}))) \tag {4} +$$ + +This adaptive fusion allows the model to emphasize the most informative modalities while reducing the influence of less relevant ones, based on the specific query. The resulting representation $\mathcal{X}_m$ is combined with the corresponding time embeddings $f(t)$ (as introduced in Section 4.1) and passed into the LLM backbone, which uses its contextual reasoning capabilities to generate the final output. 
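Equations (1)-(3) reduce to an average pool, a linear map, and a softmax. The following is a minimal NumPy sketch under toy dimensions, with a random matrix standing in for the trained single-layer MLP $\mathcal{F}(\cdot)$ and random features standing in for the query-relevant outputs of the Cross-Attention layers:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # illustrative feature dim, not the model's real width

# Stand-ins for query-relevant features f^{v,q}, f^{a,q}, f^{s,q} (seq_len x d)
feats = {m: rng.standard_normal((16, d)) for m in "vas"}

# Eq. (1): global average pooling over the sequence dim -> c_v, c_a, c_s
c = {m: x.mean(axis=0) for m, x in feats.items()}

# Eq. (2): single-layer MLP F on the concatenation -> unnormalized weights
W = rng.standard_normal((3, 3 * d)) * 0.1  # random stand-in for trained weights
logits = W @ np.concatenate([c["v"], c["a"], c["s"]])

# Eq. (3): softmax -> w_v + w_a + w_s = 1
w = np.exp(logits - logits.max())
w /= w.sum()

# Eq. (4), in spirit: scale each modality's features by its weight before
# fusion (slot compression and the two-layer MLP are omitted here)
weighted = np.concatenate([w[i] * feats[m] for i, m in enumerate("vas")], axis=0)
print(round(float(w.sum()), 6))  # → 1.0
```

Because the weights form a probability distribution, a modality that is missing or irrelevant to the query can be driven toward zero without disturbing the scale of the fused representation.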
# 4.3 Causal Event Prediction

To enhance the model's temporal reasoning capabilities and better align predictions with the underlying structure of video narratives, we employ causal event prediction, a method shown to be effective in prior work [11]. This approach enables the model to reason about cause-and-effect relationships across time, predicting upcoming events based on prior context. Specifically, given a video $V$ , we segment it into a sequence of events $\{e_1, e_2, \dots, e_K\}$ , where each event $e_k = (t_k, c_k)$ consists of a timestamp $t_k$ and an associated caption $c_k$ describing the video segment:

$$
\mathbf {V} = \left\{e _ {1}, e _ {2}, \dots , e _ {K} \right\} = \left\{\left(t _ {k}, c _ {k}\right) | 1 \leq k \leq K \right\}. \tag {5}
$$

Our goal is to predict the next event $e_k$ conditioned on the sequence of prior events $e_{1:k-1}$ , the encoded user query $f^{(q)}$ , and the multimodal features $\mathcal{X}_m$ produced by the Query-Based Connector:

$$
\mathcal {P} \left(e _ {k} \mid e _ {1: k - 1}, f ^ {(q)}, \mathcal {X} _ {m}\right) = \mathcal {P} \left(t _ {k}, c _ {k} \mid e _ {1: k - 1}, f ^ {(q)}, \mathcal {X} _ {m}\right) \tag {6}
$$

To support both temporal and textual outputs, we introduce adaptive head switching via a special $\langle \mathrm{sync}\rangle$ token during LLM generation. This token is appended to the vocabulary and serves as a control signal that guides the model to switch between the time head and the language model (LM) head, as illustrated in Figure 4. When the $\langle \mathrm{sync}\rangle$ token is encountered, the LLM switches decoding modes to generate either timestamp-aligned predictions or free-form textual outputs, depending on the task.

# 5 Experiments

In this section, we present the core experiments conducted to evaluate the performance of our proposed model.
Due to space constraints, implementation details, training procedures, and additional experimental results are provided in the Appendix.

# 5.1 Evaluation Datasets, Metrics and Baseline Models

To rigorously assess the effectiveness of TriSense, we conduct evaluations on two key temporal understanding tasks:

- Segment Captioning (SC). This task involves generating descriptive captions that accurately summarize the events occurring throughout a video. We evaluate our model on the newly introduced TriSense-2M dataset, and report results on additional datasets in the Appendix. Performance is reported using BLEU-4 [22], CIDEr [30], ROUGE-L [20], and METEOR [1] to gauge the quality and accuracy of the generated captions.
- Moment Retrieval (MR). In this task, the model is required to retrieve specific segments within a video that correspond to a given textual query. We evaluate performance on TriSense-2M, as well as two widely used public benchmarks: Charades-STA [8] and ActivityNet-Captions [13]. Retrieval effectiveness is reported using Recall@IoU=0.5, Recall@IoU=0.7, and mean IoU (mIoU), providing a comprehensive view of the model's localization accuracy at varying overlap thresholds.

The modality combinations are set to Audio-Visual-Speech (AVS), Visual-Speech (VS), Audio-Visual (AV), and Visual-Only (V). To establish a solid comparative baseline, we select representative models specifically designed for Video Temporal Grounding (VTG) tasks. These include VTimeLLM [15], TimeChat [27], VTG-LLM [12], and TRACE [11], along with two recent omni-modal models, LongVALE [10] and Qwen2.5-Omni [34]. While Momentor [23], HawkEye [31], and NumPro-FT [33] are not included in the TriSense-2M benchmark due to the unavailability of model checkpoints and their lack of support for captioning, they are included in public benchmark evaluations according to the

Table 3: Segment Captioning results.
Performance is reported on four modality settings using BLEU-4 (B), METEOR (M), ROUGE-L (R), and CIDEr (C). Best and second-best results are in bold and underlined, respectively. + +
| Model | B (AVS) | M (AVS) | R (AVS) | C (AVS) | B (VS) | M (VS) | R (VS) | C (VS) | B (AV) | M (AV) | R (AV) | C (AV) | B (V) | M (V) | R (V) | C (V) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VTimeLLM (7B) | 0.8 | 8.2 | 16.1 | 2.4 | 1.2 | 8.8 | 16.9 | 3.1 | 1.3 | 10.3 | 17.9 | 2.6 | 1.4 | 10.4 | 18.2 | 4.0 |
| TimeChat (7B) | 0.6 | 4.0 | 8.7 | 0.6 | 0.9 | 4.9 | 9.8 | 1.4 | 1.1 | 5.5 | 10.5 | 1.5 | 0.8 | 6.7 | 12.5 | 5.7 |
| VTG-LLM (7B) | 0.3 | 4.8 | 9.6 | 0.6 | 0.3 | 4.9 | 10.0 | 0.9 | 0.4 | 5.2 | 10.2 | 0.9 | 0.3 | 5.0 | 9.8 | 1.4 |
| TRACE (7B) | 1.0 | 7.6 | 13.5 | 1.1 | 1.4 | 7.8 | 14.3 | 2.3 | 1.6 | 9.0 | 16.3 | 2.6 | 1.3 | 9.4 | 16.8 | 9.5 |
| TRACE-uni (7B) | 1.1 | 8.2 | 14.7 | 1.4 | 1.5 | 8.3 | 15.1 | 2.2 | 1.6 | 9.5 | 16.3 | 2.3 | 1.3 | 9.9 | 17.6 | 8.8 |
| LongVALE (7B) | 1.2 | 8.6 | 16.7 | 4.9 | 2.3 | 10.0 | 20.1 | 5.5 | 2.5 | 11.4 | 21.3 | 5.9 | 1.5 | 11.5 | 18.8 | 0.9 |
| Qwen2.5-Omni (7B) | 0.8 | 8.8 | 13.1 | 1.7 | 0.8 | 8.6 | 13.1 | 0.8 | 1.2 | 9.8 | 15.1 | 1.3 | 1.1 | 10.1 | 14.6 | 1.1 |
| TriSense (7B) | 3.4 | 10.1 | 20.1 | 8.3 | 3.0 | 10.0 | 22.2 | 11.8 | 5.3 | 12.2 | 26.3 | 15.4 | 7.3 | 12.6 | 30.7 | 36.3 |
+ +Table 4: Moment Retrieval results with 64 frames. Performance is reported as Recall at IoU 0.5 and 0.7 across four modality settings. Best and second-best results are in bold and underlined, respectively. + +
| Model | AVS IoU=0.5 | AVS IoU=0.7 | VS IoU=0.5 | VS IoU=0.7 | AV IoU=0.5 | AV IoU=0.7 | V IoU=0.5 | V IoU=0.7 |
|---|---|---|---|---|---|---|---|---|
| VTimeLLM (7B) | 0.21 | 0.09 | 0.28 | 0.14 | 0.23 | 0.08 | 0.41 | 0.14 |
| TimeChat (7B) | 0.28 | 0.12 | 0.27 | 0.09 | 0.22 | 0.08 | 0.34 | 0.12 |
| VTG-LLM (7B) | 0.19 | 0.08 | 0.15 | 0.05 | 0.21 | 0.07 | 0.23 | 0.06 |
| TRACE (7B) | 0.39 | 0.12 | 0.31 | 0.15 | 0.24 | 0.13 | 0.42 | 0.21 |
| TRACE-uni (7B) | 0.30 | 0.17 | 0.35 | 0.17 | 0.24 | 0.18 | 0.48 | 0.22 |
| LongVALE (7B) | 0.08 | 0.01 | 0.07 | 0.01 | 0.07 | 0.01 | 0.05 | 0.01 |
| Qwen2.5-Omni (7B) | 0.61 | 0.21 | 0.61 | 0.16 | 0.28 | 0.07 | 0.18 | 0.06 |
| TriSense (7B) | 1.12 | 0.42 | 0.80 | 0.28 | 0.57 | 0.21 | 0.43 | 0.22 |
reports in their official papers. By comparing our model against these baselines across diverse tasks and evaluation metrics, we aim to provide a comprehensive assessment of TriSense's capabilities and its advancements in video temporal understanding.

# 5.2 Results and Analysis

Superior Performance on Omni-Modal datasets. We evaluate our performance on the proposed TriSense-2M dataset. As illustrated in Table 3 and Table 4, TriSense consistently outperforms existing video LLMs across nearly all evaluated tasks. It also significantly surpasses the latest omni-modal models such as LongVALE [10] and Qwen2.5-Omni [34], particularly in the audio-visual-speech (AVS) setting, where all three modalities are leveraged.

We observe that the model shows slightly lower performance on visual-only moment retrieval compared to state-of-the-art vision models, which is likely due to its optimization for multimodal settings rather than visual-only scenarios. It is also important to note that our model uses only 64 input frames during testing, compared to the larger input sizes used by other models, such as 128 frames in TRACE [11] and 100 frames in VTimeLLM [15]. Since TriSense-2M consists mostly of long videos, using fewer frames makes it more difficult for the model to achieve high accuracy in long-video moment retrieval tasks. When the videos are shorter or more frames are used, performance improves, as also supported by the results in Table 6 and Table 7.

We also conduct experiments on the public omni-modal benchmark LongVALE [10]. LongVALE is designed for event understanding across vision, audio, and language modalities, comprising 105,000 omni-modal events with precise temporal annotations and relation-aware captions, collected from 8,400 high-quality long-form videos. The Omni-VTG and Omni-SC task names follow the official LongVALE report; they correspond to AVS-MR and AVS-SC in our paper.
As summarized in Table 5, our zero-shot performance on the Moment Retrieval task is comparable to that of LongVALE, even though their model is trained on this dataset. Although the gap is larger on the Segment Captioning task, we attribute it to differences in captioning style between our Segment Captioning training data and LongVALE's. Such differences in caption patterns can lead to noticeable drops across all four evaluation metrics.

Table 5: Performance on the public omni-modal benchmark LongVALE [10]. "*" indicates the model is trained on the LongVALE dataset. The best and second-best results are highlighted in bold and underlined, respectively.
(Columns R@0.3-mIoU report Omni-VTG (AVS-MR); columns B-C report Omni-SC (AVS-SC).)

| Model | R@0.3 | R@0.5 | R@0.7 | mIoU | B | M | R | C |
|---|---|---|---|---|---|---|---|---|
| VideoChat (7B) | 2.2 | 0.9 | 0.4 | 3.0 | 0.5 | 9.6 | 0.0 | 8.2 |
| VideoChatGPT (7B) | 4.9 | 2.9 | 0.9 | 5.0 | 0.4 | 14.0 | 0.9 | 5.9 |
| VideoLLaMA (7B) | 2.5 | 1.1 | 0.3 | 1.9 | 0.9 | 11.5 | 0.1 | 8.9 |
| PandaGPT (7B) | 2.5 | 1.0 | 0.3 | 2.2 | 0.6 | 14.9 | 0.3 | 8.9 |
| NExT-GPT (7B) | 4.3 | 1.9 | 0.7 | 4.0 | 0.4 | 10.2 | 0.0 | 8.1 |
| TimeChat (7B) | 5.8 | 2.6 | 1.1 | 5.2 | 1.2 | 16.1 | 1.6 | 10.0 |
| VTimeLLM (7B) | 7.5 | 3.4 | 1.3 | 6.4 | 1.0 | 14.5 | 1.6 | 5.5 |
| LongVALE* (7B) | 15.7 | 8.6 | 3.9 | 11.0 | 5.6 | 22.4 | 20.3 | 10.9 |
| TriSense (7B) | 14.8 | 9.3 | 4.7 | 11.2 | 4.8 | 21.9 | 18.8 | 10.4 |
Table 6: Zero-shot Moment Retrieval results on public benchmarks with 64 frames. "*" indicates the model uses more frames than TriSense. The top and second-best results are highlighted in bold and underlined, respectively.
(The first three metric columns are Charades-STA; the last three are ActivityNet-Captions.)

| Model | IoU=0.5 | IoU=0.7 | mIoU | IoU=0.5 | IoU=0.7 | mIoU |
|---|---|---|---|---|---|---|
| VTimeLLM* (7B) | 27.5 | 11.4 | 31.2 | 27.8 | 14.3 | 30.4 |
| VTimeLLM* (13B) | 34.3 | 14.7 | 34.6 | 29.5 | 14.2 | 31.4 |
| TimeChat* (7B) | 32.2 | 13.4 | - | 4.6 | 2.0 | 6.9 |
| Momentor* (7B) | 26.6 | 11.6 | 28.5 | 23.0 | 12.4 | 29.3 |
| HawkEye (7B) | 31.4 | 14.5 | 33.7 | 29.3 | 10.7 | 32.7 |
| VTG-LLM* (7B) | 33.8 | 15.7 | - | 8.3 | 3.7 | 12.0 |
| TRACE* (7B) | 40.3 | 19.4 | 38.7 | 37.7 | 24.0 | 39.0 |
| TRACE-uni* (7B) | 43.7 | 21.0 | 41.5 | 38.2 | 24.7 | 39.4 |
| NumPro-FT* (7B) | 42.0 | 20.6 | 41.4 | 37.5 | 20.6 | 38.8 |
| TriSense (7B) | 42.3 | 27.6 | 39.8 | 39.6 | 27.2 | 40.1 |
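The temporal-grounding metrics reported in Table 6 (recall at IoU thresholds and mean IoU) can be computed as in the following illustrative sketch. This is a re-implementation for clarity, not the authors' evaluation code:

```python
# Sketch of the moment-retrieval metrics: temporal IoU between a predicted
# and a ground-truth segment, recall at IoU thresholds, and mean IoU (mIoU).

def temporal_iou(pred, gt):
    """IoU of two (start, end) segments in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])  # hull == union for intervals
    return inter / union if union > 0 else 0.0

def moment_retrieval_metrics(preds, gts, thresholds=(0.5, 0.7)):
    """preds/gts: lists of (start, end) pairs, one prediction per query."""
    ious = [temporal_iou(p, g) for p, g in zip(preds, gts)]
    recall = {t: 100.0 * sum(iou >= t for iou in ious) / len(ious) for t in thresholds}
    miou = 100.0 * sum(ious) / len(ious)
    return recall, miou

# Toy example with two queries.
preds = [(2.0, 8.0), (10.0, 20.0)]
gts = [(3.0, 9.0), (12.0, 18.0)]
recall, miou = moment_retrieval_metrics(preds, gts)
```

A prediction counts toward R@0.7 only if its IoU with the ground-truth span reaches 0.7, which is why the IoU=0.7 columns are the most demanding in Table 6.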
Zero-shot performance on public Moment Retrieval benchmarks. We also evaluate TriSense in a zero-shot setting on two classical visual-only benchmarks, Charades-STA [8] and ActivityNet-Captions [13], as shown in Table 6. Although TriSense is slightly inferior in Table 4, it remains competitive in visual-only settings and achieves notably higher accuracy at IoU=0.7 than the other models, even with fewer frames.

Table 7: Ablation studies on Moment Retrieval. The top and second-best results are highlighted in bold and underlined, respectively.
(Column groups left to right: AVS-MR, VS-MR, AV-MR, V-MR.)

| Model | Frames | IoU=0.5 | IoU=0.7 | IoU=0.5 | IoU=0.7 | IoU=0.5 | IoU=0.7 | IoU=0.5 | IoU=0.7 |
|---|---|---|---|---|---|---|---|---|---|
| *Training Stages* |  |  |  |  |  |  |  |  |  |
| Stage1 Only | 64 | 0.07 | 0.01 | 0.06 | 0.01 | 0.06 | 0.00 | 0.02 | 0.00 |
| Stage1+2 | 64 | 0.52 | 0.19 | 0.43 | 0.18 | 0.32 | 0.12 | 0.27 | 0.14 |
| *Connector* |  |  |  |  |  |  |  |  |  |
| Addition | 64 | 0.71 | 0.22 | 0.69 | 0.21 | 0.41 | 0.11 | 0.22 | 0.19 |
| Fixed Weights | 64 | 0.89 | 0.38 | 0.77 | 0.24 | 0.52 | 0.19 | 0.44 | 0.23 |
| *Frame Number* |  |  |  |  |  |  |  |  |  |
| TriSense (7B) | 32 | 0.74 | 0.27 | 0.68 | 0.18 | 0.39 | 0.13 | 0.24 | 0.11 |
| TriSense (7B) | 64 | 1.12 | 0.42 | 0.80 | 0.28 | 0.57 | 0.21 | 0.43 | 0.22 |
| TriSense (7B) | 128 | 1.12 | 0.43 | 0.87 | 0.31 | 0.64 | 0.32 | 0.49 | 0.26 |
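The fusion strategies compared in the Connector ablation of Table 7 (Addition, Fixed Weights, and query-adaptive weighting) can be sketched on toy feature vectors as follows. This is an illustrative sketch, not the actual Query-Based Connector; the query embedding and dot-product scoring are assumptions for demonstration:

```python
# Toy comparison of the three fusion strategies ablated in Table 7.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(features, weights):
    """Weighted sum of equally sized per-modality feature vectors."""
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features)) for i in range(dim)]

audio, visual, speech = [0.2, 0.4], [0.6, 0.1], [0.3, 0.3]
feats = [audio, visual, speech]

# "Addition" baseline: unweighted sum of the modality features.
added = fuse(feats, [1.0, 1.0, 1.0])

# "Fixed Weights" baseline: equal weights (1/3 each in the AVS setting).
fixed = fuse(feats, [1/3, 1/3, 1/3])

# Query-adaptive weighting: scores conditioned on the query (here a toy
# dot product against a hypothetical query embedding), normalized by softmax.
query = [1.0, -0.5]
scores = [sum(q * x for q, x in zip(query, f)) for f in feats]
adaptive = fuse(feats, softmax(scores))
```

The adaptive variant lets the query upweight whichever modality scores highest, which is the behavior the Fixed Weights baseline cannot reproduce.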
Ablation Studies. We conduct ablation experiments to assess the contribution of different components, including the training strategy, the Query-Based Connector, and the number of frames processed by the model. As shown in Table 7 and Table 8, we compare our adaptive weighting strategy against simpler alternatives. The Addition baseline directly sums the modality features without weighting. The Fixed Weights baseline assigns equal weights (e.g., 0.33 each in AVS) or fixed values depending on the modality pair (e.g., 0.5 for each active modality and 0 for inactive ones in VS/AV), and uses only the visual stream in visual-only tasks (weight of 1, others set to 0). These comparisons confirm the effectiveness of our query-adaptive weighting mechanism.

Table 8: Ablation results on Segment Captioning. We analyze the effects of training stages, connector design, and input frame count across four modality settings (AVS, VS, AV, V). Metrics include BLEU-4 (B), METEOR (M), ROUGE-L (R), and CIDEr (C).
(Column groups left to right: AVS-SC, VS-SC, AV-SC, V-SC.)

| Model | Frames | B | M | R | C | B | M | R | C | B | M | R | C | B | M | R | C |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| *Training Stages* |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Stage1 Only | 64 | 0.0 | 1.5 | 4.3 | 0.1 | 0.0 | 1.4 | 4.1 | 0.1 | 0.0 | 1.3 | 4.1 | 0.1 | 0.0 | 1.3 | 4.0 | 0.1 |
| Stage1+2 | 64 | 2.1 | 9.3 | 19.8 | 6.6 | 1.8 | 8.6 | 20.2 | 6.7 | 3.0 | 11.1 | 21.5 | 8.2 | 5.3 | 6.2 | 11.8 | 13.8 |
| *Connector* |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Addition | 64 | 1.6 | 9.9 | 18.0 | 5.8 | 1.8 | 9.1 | 19.3 | 5.8 | 3.8 | 11.2 | 22.0 | 11.5 | 5.7 | 11.1 | 28.5 | 21.9 |
| Fixed Weights | 64 | 3.1 | 9.8 | 19.4 | 6.6 | 2.4 | 9.3 | 20.7 | 7.9 | 4.3 | 11.8 | 26.1 | 15.3 | 7.4 | 12.7 | 30.6 | 36.7 |
| *Frame Number* |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| TriSense (7B) | 32 | 3.2 | 9.7 | 19.9 | 7.7 | 2.1 | 9.3 | 19.5 | 7.9 | 3.4 | 11.1 | 22.5 | 9.6 | 6.3 | 11.7 | 29.8 | 29.2 |
| TriSense (7B) | 64 | 3.4 | 10.1 | 20.1 | 8.3 | 3.0 | 10.0 | 22.2 | 11.8 | 5.3 | 12.2 | 26.3 | 15.4 | 7.3 | 12.6 | 30.7 | 36.3 |
| TriSense (7B) | 128 | 3.4 | 10.2 | 20.2 | 8.5 | 3.1 | 9.9 | 22.8 | 11.5 | 5.4 | 12.3 | 26.7 | 15.4 | 7.3 | 12.8 | 30.8 | 36.1 |
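Among the captioning metrics in Table 8, ROUGE-L is the simplest to sketch from scratch: it is an F-measure over the longest common subsequence (LCS) of candidate and reference tokens. The following is an illustrative implementation; the reported numbers come from standard captioning evaluation toolkits, and the beta=1.2 weighting here is a common convention, assumed rather than confirmed by the paper:

```python
# Minimal ROUGE-L sketch (LCS-based F-measure) for the "R" columns in Table 8.

def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference, beta=1.2):
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return (1 + beta ** 2) * prec * rec / (rec + beta ** 2 * prec)

score = rouge_l("a man speaks over soft piano music",
                "a man is speaking over soft piano music")
```

Because ROUGE-L rewards in-order token overlap rather than exact n-grams, it is less sensitive to paraphrasing than BLEU-4, which partly explains why the R columns degrade more gracefully than the B columns across ablations.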
In the Training Stages ablation, Stage 1 focuses solely on modality alignment with no temporal information, and therefore performs poorly on both tasks. After training in Stage 2, however, the model acquires around $50\%$ of its capability. For the Connector ablation, we find that simply adding all modalities together does not allow the model to emphasize the more important modalities, resulting in a drop in performance. Similarly, in the Fixed Weights ablation for the AVS/VS/AV tasks, assigning equal weights to the modalities fails to capture their varying importance, leading to inferior performance compared to dynamic modality weights. However, fixing the visual weight to 1 does yield slightly better performance in the visual-only (V) setting. We also observe that increasing the number of frames from 64 to 128 improves performance in every scenario, a trend consistent across both Moment Retrieval and Segment Captioning.

# 6 Conclusion

In this work, we introduced TriSense, a novel multimodal large language model designed to advance comprehensive video understanding by integrating the visual, audio, and speech modalities. At the core of our model is the Query-Based Connector, which enables dynamic, modality-adaptive fusion, allowing the system to operate effectively across arbitrary combinations of input modalities. This capability is essential for real-world scenarios, where certain modalities may be partially available or entirely absent. To support progress in this area, we constructed TriSense-2M, a large-scale dataset containing over 2 million carefully curated samples. These samples span diverse scenes, durations, and modality alignments, offering a rich foundation for training and evaluation.
Through extensive experimentation, we demonstrated that TriSense consistently achieves state-of-the-art performance on key temporal video understanding tasks, including moment retrieval and long-form video segment captioning. Our modality-adaptive framework marks a substantial step toward more flexible and human-like video understanding systems. It not only delivers strong performance in controlled evaluations but also shows robust applicability in real-world conditions with varying input configurations. We believe that both TriSense and the TriSense-2M dataset will serve as valuable resources for future research in multimodal learning and temporal reasoning, enabling broader advances across a range of video understanding applications.

# 7 Acknowledgements

This project is jointly supported by the University of Western Australia (UWA) HDR Scholarship, Australian Research Council grants DP210101682 and DP210102674, and the Australian Government through the Australian Research Council's DECRA funding scheme DE250100030. We thank Zhejiang Lab for providing the computational resources.

# References

[1] Satanjeev Banerjee and Alon Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, 2005. 7
[2] Sihan Chen, Xingjian He, Longteng Guo, Xinxin Zhu, Weining Wang, Jinhui Tang, and Jing Liu. Valor: Vision-audio-language omni-perception pretraining model and dataset. arXiv preprint arXiv:2304.08345, 2023. 3, 4, 5
[3] Sihan Chen, Handong Li, Qunbo Wang, Zijia Zhao, Mingzhen Sun, Xinxin Zhu, and Jing Liu. Vast: A vision-audio-subtitle-text omni-modality foundation model and dataset. Advances in Neural Information Processing Systems, 36, 2024. 3, 4, 5
[4] Sanyuan Chen, Yu Wu, Chengyi Wang, Shujie Liu, Daniel Tompkins, Zhuo Chen, and Furu Wei. Beats: Audio pre-training with acoustic tokenizers. 2022.
5, 14
[5] Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, et al. Videollama 2: Advancing spatial-temporal modeling and audio understanding in video-llms. arXiv preprint arXiv:2406.07476, 2024. 3
[6] Konstantinos Drossos, Samuel Lipping, and Tuomas Virtanen. Clotho: An audio captioning dataset. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 736-740. IEEE, 2020. 13, 14
[7] Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. arXiv preprint arXiv:2405.21075, 2024. 17, 18
[8] Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. Tall: Temporal activity localization via language query. In Proceedings of the IEEE international conference on computer vision, pages 5267-5275, 2017. 2, 3, 5, 7, 9
[9] Tiantian Geng, Teng Wang, Jinming Duan, Runmin Cong, and Feng Zheng. Dense-localizing audio-visual events in untrimmed videos: A large-scale benchmark and baseline. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22942-22951, 2023. 5
[10] Tiantian Geng, Jinrui Zhang, Qingni Wang, Teng Wang, Jinming Duan, and Feng Zheng. Longvale: Vision-audio-language-event benchmark towards time-aware omni-modal perception of long videos. arXiv preprint arXiv:2411.19772, 2024. 2, 3, 4, 5, 7, 8, 9
[11] Yongxin Guo, Jingyu Liu, Mingda Li, Xiaoying Tang, Qingbin Liu, and Xi Chen. Trace: Temporal grounding video llm via causal event modeling, 2024. 2, 3, 5, 7, 8, 14, 15, 16, 17
[12] Yuchen Guo, Linchao Liu, Xin Li, and Ping Luo. Vtg-llm: Efficient temporal grounding in long videos with compressed visual cues. arXiv preprint arXiv:2401.07684, 2024. 2, 3, 6, 7, 16
[13] Fabian Caba Heilbron and Juan Carlos Niebles.
Activitynet captions: A dense-captioning dataset for evaluating understanding of complex video activities. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5246-5255, 2015. 2, 5, 7, 9
[14] Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing moments in video with natural language. In Proceedings of the IEEE international conference on computer vision, pages 5803-5812, 2017. 2, 3
[15] Junjie Huang, Ming Wu, Linchao Li, Yi Zhu, Yu Chen, Siwei Yan, Yi Liu, and Ping Luo. Vtimellm: Compression of time into a latent embedding for efficient video-language modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16694-16704, 2023. 2, 3, 7, 8
[16] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024. 4, 15
[17] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b, 2023. 14
[18] Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. Dense-captioning events in videos. In Proceedings of the IEEE international conference on computer vision, pages 706-715, 2017. 3
[19] Jiabo Lei, Licheng Yu, Mohit Bansal, and Tamara L Berg. Tallyqa: Answering complex counting questions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14168-14178, 2021. 2
[20] Chin-Yew Lin and Franz Josef Och. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics.
In Proceedings of the 42nd annual meeting of the Association for Computational Linguistics (ACL-04), pages 605-612, 2004. 7
[21] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023. 13, 14
[22] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318, 2002. 7
[23] Yiyuan Qian, Linchao Liu, Xin Li, and Ping Luo. Momentor: Advancing video understanding with temporal reasoning in large language models. arXiv preprint arXiv:2401.03923, 2024. 2, 3, 7
[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 5
[25] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR, 2021. 14
[26] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision, 2022. 5, 14
[27] Junjie Ren, Can Li, Ming Zhao, Jinhui Liu, Junchi Yang, and Jian Wang. Timechat: A time-aware large language model for video question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16726-16736, 2023. 3, 7
[28] Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. Videoclip: Contrastive pre-training for zero-shot video-text understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16632-16642, 2022.
2
[29] Cassia Valentini-Botinhao et al. Noisy speech database for training speech enhancement algorithms and tts models. University of Edinburgh, School of Informatics, Centre for Speech Technology Research (CSTR), 2017. 13, 14
[30] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566-4575, 2015. 7
[31] Hao Wang, Linchao Liu, Xin Li, and Ping Luo. Hawkeye: Visually explainable reasoning in video question answering. arXiv preprint arXiv:2401.04705, 2024. 7
[32] Yi Wang, Kunchang Li, Xinhao Li, Jiashuo Yu, Yinan He, Guo Chen, Baoqi Pei, Rongkun Zheng, Jilan Xu, Zun Wang, et al. Internvideo2: Scaling video foundation models for multimodal video understanding. arXiv preprint arXiv:2403.15377, 2024. 3, 4
[33] Yongliang Wu, Xinting Hu, Yuyang Sun, Yizhou Zhou, Wenbo Zhu, Fengyun Rao, Bernt Schiele, and Xu Yang. Number it: Temporal grounding videos like flipping manga. arXiv preprint arXiv:2411.10332, 2024. 7
[34] Jin Xu, Zhifang Guo, Jinzheng He, Hangrui Hu, Ting He, Shuai Bai, Keqin Chen, Jialin Wang, Yang Fan, Kai Dang, Bin Zhang, Xiong Wang, Yunfei Chu, and Junyang Lin. Qwen2.5-omni technical report. arXiv preprint arXiv:2503.20215, 2025. 2, 3, 7, 8
[35] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024. 4
[36] Hang Zhang, Xin Li, and Lidong Bing.
Video-llama: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858, 2023. 3
[37] Yuanhan Zhang, Jinming Wu, Wei Li, Bo Li, Zejun Ma, Ziwei Liu, and Chunyuan Li. Video instruction tuning with synthetic data, 2024. 13, 14
[38] Bolei Zhou, Yu Guo, Meng Zhang, Xiongwei Wang, Siyuan Pu, Yuan Wu, Yan Zhang, Yan Wang, and Li Li. Recent advances in video understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. 2

# Contents of Appendix

A Implementation Details
 A.1 Training Recipe
 A.2 Datasets
 A.3 Detailed training settings
B More details of TriSense-2M
 B.1 Training Generator and Judger
 B.2 Training data format
C Additional Experiments
 C.1 More ablation studies
 C.2 General understanding dataset
D Case studies on different scenarios

# A Implementation Details

# A.1 Training Recipe

Our training process follows a structured three-stage approach (Feature Alignment, Connector Generalization, and Instruction Tuning) to progressively equip the model with strong multimodal and temporal reasoning capabilities. The specific components trained at each stage are illustrated in Figure 4.

Feature Alignment. In the first stage, only the Query-Based Connector and LM Head are trainable, while all other components remain frozen. This phase employs single-modality inputs, enabling the model to gain an initial understanding of each of the three modalities individually. It also helps the model learn to assign weights effectively in the absence of multimodal context, promoting a focused grasp of modality-specific features without interference from other components.

Connector Generalization. During the second stage, we incorporate mixed-modality data and train the Query-Based Connector, Time Encoder, Time Head, and LM Head, while keeping the LLM backbone fixed.
This phase equips the connector to handle weight allocation across multiple modalities, thereby enhancing its generalization beyond isolated modalities. Simultaneously, training the Time Encoder and Time Head introduces the model to temporal structure, laying the groundwork for capturing inter-modality dynamics over time.

Instruction Tuning. In the final stage, we freeze the Query-Based Connector and train the remaining components (the Time Encoder, Time Head, LM Head, and the LLM backbone) using mixed-modality inputs. By keeping the connector fixed, we retain the modality alignment learned in previous stages. This step concentrates on refining temporal reasoning and language understanding, strengthening the LLM's ability to interpret and process multimodal, temporally sensitive queries across diverse scenarios.

# A.2 Datasets

In alignment with the objectives of the three-stage training process outlined earlier, we employ varying datasets and data volumes at each stage. The overarching goal is to enhance the model's capacity for temporal video understanding while retaining robust general video understanding abilities. An overview of the datasets used in each stage is provided in Table 9.

Table 9: Datasets and sample sizes used across the three training stages.
| Stages | Datasets | Total Quantity |
|---|---|---|
| Stage 1 | Clotho [6], LLaVA-LCS558K [21], Valentini-Botinhao Speech Dataset [29] | 600K |
| Stage 2 | TriSense-2M (880K), LLaVA-Video-178K (120K) [37] | 1M |
| Stage 3 | TriSense-2M (1.12M), LLaVA-Video-178K (380K) [37] | 1.5M |
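The three-stage recipe described in Section A.1 amounts to a freezing schedule over the model's components. The sketch below is framework-agnostic, and the component names are hypothetical placeholders rather than the actual module names in the codebase; a real implementation would toggle `requires_grad` on the corresponding parameter groups:

```python
# Which components are trainable at each stage (per Section A.1; the three
# modality encoders stay frozen throughout all stages).
TRAINABLE = {
    1: {"connector", "lm_head"},                                  # Feature Alignment
    2: {"connector", "time_encoder", "time_head", "lm_head"},     # Connector Generalization
    3: {"time_encoder", "time_head", "lm_head", "llm_backbone"},  # Instruction Tuning
}

COMPONENTS = ["vision_encoder", "audio_encoder", "speech_encoder",
              "connector", "time_encoder", "time_head", "lm_head", "llm_backbone"]

def freeze_plan(stage):
    """Map each component to whether its parameters should be trainable."""
    return {name: name in TRAINABLE[stage] for name in COMPONENTS}

plan = freeze_plan(3)  # connector frozen, backbone unfrozen in Instruction Tuning
```

Freezing the connector in Stage 3 preserves the modality alignment learned earlier while the backbone adapts to instructions.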
Stage 1. For the initial stage, we use a combination of Clotho [6], LLaVA-LCS558K [21], and the Valentini-Botinhao Speech Dataset [29] as the training data:

- Clotho is an audio captioning dataset containing 4,981 audio clips, each paired with five unique captions, totaling 24,905 annotations. The audio clips range from 15 to 30 seconds, and each caption consists of 8 to 20 words.
- LLaVA-LCS558K is a concept-balanced multimodal dataset comprising 558,000 image-text pairs, annotated with BLIP-generated captions. It is designed to support feature alignment during the pretraining of vision-language models.
- The Valentini-Botinhao Speech Dataset is a parallel corpus of clean and noisy speech recordings. It is widely used for training and evaluating speech enhancement and text-to-speech (TTS) systems, featuring 48 kHz audio from multiple speakers under various noise conditions.

Stage 2 and 3. For these stages, we adopt our newly proposed TriSense-2M dataset, applying a 9:1 training-to-testing split. This results in 1.9 million training samples and 0.1 million test samples. The training data is further partitioned into approximately 880K samples for Stage 2 and 1.12M samples for Stage 3.

To ensure the model also retains general video understanding capabilities, we supplement the training data with a portion of LLaVA-Video-178K [37], which includes video captioning, open-ended QA, and multiple-choice QA tasks. This mixed-task dataset helps the model develop broader understanding skills beyond temporal reasoning.

Table 10: Details of the test set.
| Num. Videos | Num. Events | Avg. Event Duration | Total Tasks |
|---|---|---|---|
| 3,805 | 11,415 | 57.23s | 91,320 |
To keep evaluation time manageable, we extract 11,415 challenging samples from the 0.1M test set, as shown in Table 10, using two filtering criteria: 1) the majority of events should occur in the middle portion of the video rather than at the beginning; 2) captions must contain at least 20 words. Evaluation is conducted on a single A100 SXM4 80GB GPU with a batch size of 1, requiring approximately 8-10 hours to complete. All compared models are evaluated on this same test subset, using their officially recommended hyperparameters (e.g., number of frames, temperature, top-p).

# A.3 Detailed training settings

Our multimodal framework incorporates dedicated encoders for each modality. For the visual modality, we adopt openai/clip-vit-large-patch14-336 [25]; for the audio and speech modalities, we employ BEATs_iter3+ (AS2M) (cpt2) [4] and Whisper-large-V3 [26], respectively. As the large language model (LLM) backbone, we select Mistral-7B [17], initialized from TRACE [11], rather than other LLM backbones. This choice is motivated by TRACE's prior training on large-scale temporal understanding data, which equips it with stronger temporal reasoning capabilities. The maximum context length is set to 4096 tokens.

During training, videos are resampled at 1 frame per second (fps), which reduces input redundancy and accelerates training; this step is omitted during inference to retain full fidelity.

In Stage 1, we train the model with a batch size of 512 and single-frame input, completing within 10 hours on 4×A100 SXM4-80GB GPUs. Stages 2 and 3 are conducted on 16×A100 SXM4-80GB GPUs with batch sizes of 128 and 256, respectively. Stage 2 requires approximately 3.5 days to complete, and Stage 3 requires 7 days.
We use DeepSpeed Zero2 because the BEATs model does not function properly under Zero3 settings. Further details on datasets and hyperparameters are provided in Table 11.

Table 11: Training configurations and hyperparameters by stage.
| Settings | Stage 1 | Stage 2 & Stage 3 |
|---|---|---|
| Computation | 4×A100 SXM4-80GB | 16×A100 SXM4-80GB |
| Vision Encoder | clip-vit-large-patch14-336 | clip-vit-large-patch14-336 |
| Audio Encoder | BEATs_iter3+ (AS2M) (cpt2) | BEATs_iter3+ (AS2M) (cpt2) |
| Speech Encoder | Whisper-large-V3 | Whisper-large-V3 |
| DeepSpeed Stage | Zero2 Offload | Zero2 Offload |
| LLM Backbone | Mistral-7B-v0.2 | Mistral-7B-v0.2 |
| Batch Size | 512 | 128 & 256 |
| Num Frames | 1 | 64 |
| Frame Sample | Uniform | Uniform |
| Train Epochs | 2 | 1 |
| Learning Rate | 1e-3 | 5e-6 |
| LR Scheduler | Cosine | Cosine |
| Model Max Length | 4096 | 4096 |
| Training Duration | 10 Hours | 3.5 days & 5.5 days |
# B More details of TriSense-2M

# B.1 Training Generator and Judger

This section describes the data generation and manual filtering process used to prepare training data for both the Generator and the Judger. To ensure efficiency and quality in omni-modal caption generation, we first utilize OpenAI o1 [16] to produce high-quality annotation samples for supervised fine-tuning (SFT). The specific prompts used for this process are shown in Figure 5. These prompts serve a dual purpose: they are used to generate training data via GPT and also function as system prompts during the SFT of both the Generator and the Judger.

To further enhance caption quality, we implement a two-stage scoring mechanism. After captions are generated, GPT conducts a self-evaluation. Then, a separate GPT instance provides an additional evaluation to filter out low-quality samples. Following this automated scoring, we conduct manual sampling to verify consistency and ensure a high quality standard is met.

Data is generated in batches of 1,000 samples. From each batch, we randomly select 500 samples for manual review to evaluate the generated content and the reliability of GPT's scoring. If over $80\%$ of the reviewed samples meet our quality criteria, the batch is retained; otherwise, it is discarded.

Ultimately, we curate 10,000 training samples for the Generator and 3,000 samples for the Judger. Both models are trained for 3 epochs to establish effective captioning and judging capabilities.

# B.2 Training data format

This section outlines the data format used for training TriSense. We adopt a ShareGPT-style format, where each training sample consists of 8 conversation rounds, each corresponding to a different modality combination. The tasks and settings within these rounds are randomized; for instance, one round might involve a VS-SC task, while the next could be an AVS-SC or V-MR task.
Following the approach in TRACE, we use special tokens such as $\langle \mathrm{sync}\rangle$ and $\langle \mathrm{time}\rangle$ to signal the model to switch between different prediction heads. An example of the data structure is illustrated in Figure 6.

# C Additional Experiments

# C.1 More ablation studies

We conduct small-scale ablation studies on slot compression and task-specific heads. Specifically, we randomly selected 20K samples and trained for 3 epochs, initializing all weights from [11]. All experiments were conducted with 64 sampled frames. For token compression, we explored different slot configurations, including 8, 16, 32, and 64. For the prediction head, we removed the Time Head and instead encoded temporal information into special tokens, which were injected into the LLM. Prediction was then performed using only the LM Head.

You are a helpful assistant designed to output CAPTIONS and JSON. Given three distinct captions describing audio, visual, and speech scenarios, please generate omni-modal captions for the following combinations: Audio-Visual-Speech (AVS), Visual-Speech (VS), and Audio-Visual (AV).

# Input Examples:

Audio: {audio_caption}
Visual: {visual_caption}
Speech: {speech_caption}

# You should generate captions according to the following rules:

1. The generated captions must preserve all essential information from the original captions.
2. Do not introduce any content that is not present in the original captions.
3. Do not copy verbatim from the original captions; instead, paraphrase the key information and incorporate it into the new caption.
4. Each generated caption must not exceed 200 words.

When you generate captions, assign a score for yourself from 1 to 5. If you think the quality is very poor, assign a score of 1. If the generated captions satisfy approximately $80\%$ of the above criteria, assign a score of 3. If they fully satisfy all criteria ($100\%$), assign a score of 5.

Please think carefully and provide your answer in JSON format as follows: {"AVS": "", "AV": "", "VS": "", "Score": ""}. Note that only one caption should be provided for each of AVS, AV, and VS. You MUST respond in JSON format.

You are a helpful assistant designed to evaluate caption quality. Given three original captions and three generated omni-modal captions, please assess the quality of the generated captions based on the following criteria:

1. The generated captions must include all key information from the original captions.
2. No new content may be introduced beyond what is present in the original captions.
3. The generated captions must not copy text directly from the original captions; instead, they should paraphrase and incorporate the essential information.
4. Each generated caption must not exceed 200 words.

# Input Examples:

Audio: {audio_caption}
Visual: {visual_caption}
Speech: {speech_caption}
AVS: {audio_visual_speech_caption}
VS: {visual_speech_caption}
AV: {audio_visual_caption}

Assign a score from 1 to 5. If the quality is very poor, assign a score of 1. If the generated captions satisfy approximately $80\%$ of the above criteria, assign a score of 3. If they fully satisfy all criteria ($100\%$), assign a score of 5.

Please analyze carefully and provide your evaluation in the following JSON format: {"AVS": "", "AV": "", "VS": "", "Score": ""}. Note that you should provide only one score and one caption for each of AVS, AV, and VS. You MUST respond in JSON format.

Figure 5: Prompts used for training the Generator and Judger. The left prompt guides GPT in generating omni-modal captions for the Generator using audio, visual, and speech inputs. The right prompt is used to train the Judger by instructing GPT to assess the quality of generated captions based on coverage, accuracy, and paraphrasing. During data creation, samples are randomly selected and manually filtered to ensure high-quality training data.

Table 12: Ablation study on slot compression [12].
| Slots | AVS-SC | AVS-MR |
|---|---|---|
| Slot=8 | B: 1.6 M: 3.1 R: 13.1 C: 1.9 | IoU=0.5: 0.3 IoU=0.7: 0.1 |
| Slot=16 | B: 1.8 M: 3.4 R: 14.6 C: 2.4 | IoU=0.5: 0.4 IoU=0.7: 0.3 |
| Slot=32 | B: 1.8 M: 3.4 R: 15.6 C: 2.4 | IoU=0.5: 0.4 IoU=0.7: 0.3 |
| Slot=64 | B: 1.9 M: 3.5 R: 16.1 C: 2.4 | IoU=0.5: 0.5 IoU=0.7: 0.2 |
Table 13: Ablation study on the task-specific heads.
| Heads | AVS-SC | AVS-MR |
|---|---|---|
| Unified Head | B: 1.5 M: 3.1 R: 13.2 C: 1.1 | IoU=0.5: 0.2 IoU=0.7: 0.1 |
| Time Head + LM Head | B: 1.8 M: 3.4 R: 14.6 C: 2.4 | IoU=0.5: 0.4 IoU=0.7: 0.3 |

As shown in Table 12, reducing the compression rate (by increasing the number of slots) brings slight performance improvements for slot compression, but it also significantly increases computational overhead, so the benefits are subject to diminishing returns. Setting the number of slots to 16 currently offers a good balance between performance and computational cost. For the task-specific heads in Table 13, using a single unified head leads to a significant drop in performance, indicating that the Time Head provides additional temporal information that benefits the model. Better unified solutions may emerge in the future.

We also provide further ablation studies in Table 14 and Table 15 for the different branches of the Query-Based Connector, based on the same 20K randomly selected samples as above. For example, AV-Only in the tables means we discarded the Speech branch. During training, all weights are initialized from [11], and the LLM backbone is frozen. The results show that all three modalities contribute positively to all tasks, with the full AVS configuration performing best.

# Dataset Format Example of ShareGPT

```jsonl
{
"video": "ZEqicUE2R0l.mp4",
"conversations":
{
"from": "human",
"value": "
+ +As shown in Table 12, for slot compression, although reducing the compression rate (by increasing the number of slots) can bring slight performance improvements, it also results in a significant increase in computational overhead. Therefore, the benefits are subject to diminishing returns. As shown in the table, setting the slot to 16 currently offers a good balance between performance and computational cost. For task-specific head in Table 13, using a single unified head alone leads to a significant drop in performance, indicating that the time head provides additional temporal information which is beneficial to the model. There may be better unified solutions in the future. + +We also provide more ablation studies in Table 14 and Table 15 for different branches of the Query-Based Connector based on the same 20K randomly selected samples as above. For example, AV-Only in the table means we discarded the Speech branch. During training, all the weights are initialized from [11], and LLM backbone is set to frozen. Results show that all three modalities contribute positively to all tasks, with the full AVS configuration performing best. + +# Dataset Format Example of ShareGPT + +```jsonl +{ +"video": "ZEqicUE2R0l.mp4", +"conversations": +{ +"from": "human", +"value": "