Title: Diffusion Model Based Posterior Sampling for Noisy Linear Inverse Problems
URL Source: https://arxiv.org/html/2211.12343
License: CC BY 4.0. arXiv:2211.12343v4 [cs.LG] 20 Oct 2024

Diffusion Model Based Posterior Sampling for Noisy Linear Inverse Problems

Xiangming Meng (xiangmingmeng@intl.zju.edu.cn), Zhejiang University - University of Illinois Urbana-Champaign Institute, Zhejiang University, Haining 314400, China

Yoshiyuki Kabashima (kaba@phys.s.u-tokyo.ac.jp), Institute for Physics of Intelligence & Department of Physics, The University of Tokyo, Tokyo 113-0033, Japan

Abstract
With the rapid development of diffusion models and flow-based generative models, there has been a surge of interest in solving noisy linear inverse problems, e.g., super-resolution, deblurring, denoising, and colorization, with generative models. However, while remarkable reconstruction performance has been achieved, inference is typically too slow: most methods rely on the seminal diffusion posterior sampling (DPS) framework and therefore require time-consuming gradient computation through back-propagation to approximate the intractable likelihood score. To address this issue, this paper provides a fast and effective solution by proposing a simple closed-form approximation to the likelihood score. For both diffusion and flow-based models, extensive experiments are conducted on various noisy linear inverse problems, including noisy super-resolution, denoising, deblurring, and colorization. In all these tasks, our method (namely DMPS) demonstrates highly competitive or even better reconstruction performance while being significantly faster than all the baseline methods.
Keywords: inverse problems; diffusion models; flow-based models; image restoration.

1 Introduction
Many problems in science and engineering, such as computer vision and signal processing, can be cast as the following noisy linear inverse problem:
$$\mathbf{y} = \mathbf{A}\mathbf{x}_0 + \mathbf{n}, \qquad (1)$$
where $\mathbf{A} \in \mathbb{R}^{m \times n}$ is a (known) linear mixing matrix, $\mathbf{n} \sim \mathcal{N}(\mathbf{n}; \mathbf{0}, \sigma_y^2\mathbf{I})$ is i.i.d. additive Gaussian noise, and the goal is to recover the unknown target signal $\mathbf{x}_0 \in \mathbb{R}^{n \times 1}$ from the noisy linear measurements $\mathbf{y} \in \mathbb{R}^{m \times 1}$. Notable examples include a wide class of image restoration tasks, such as super-resolution (SR) Ledig et al. (2017), colorization Zhang et al. (2016), denoising Buades et al. (2005), deblurring Yuan et al. (2007), and inpainting Bertalmio et al. (2000), as well as the well-known compressed sensing (CS) Candès et al. (2006); Candès and Wakin (2008) in signal processing. One big challenge of these linear inverse problems is that they are ill-posed O'Sullivan (1986), i.e., the solution to (1) is not unique (even in the noiseless case). This can be tackled from a Bayesian perspective: supposing the target signal $\mathbf{x}_0$ follows a proper prior distribution $p(\mathbf{x}_0)$, given noisy observations $\mathbf{y}$, one can perform posterior sampling from $p(\mathbf{x}_0 \mid \mathbf{y})$ to recover $\mathbf{x}_0$. Hence, an accurate prior $p(\mathbf{x}_0)$ is crucial in recovering $\mathbf{x}_0$. Various priors or structural constraints have been proposed, including sparsity Candès and Wakin (2008), low-rank structure Fazel et al. (2008), and total variation Candès et al. (2006), just to name a few. However, such handcrafted priors might fail to capture the rich structure of natural signals Ulyanov et al. (2018).
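As a concrete illustration of the measurement model (1), the following minimal numpy sketch (our own example; the dimensions, the random Gaussian $\mathbf{A}$, and the noise level are arbitrary choices, whereas the paper's operators $\mathbf{A}$ are structured, e.g., blurring or downsampling) simulates noisy linear measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, sigma_y = 16, 8, 0.05          # signal dim, measurement dim, noise std
x0 = rng.standard_normal(n)          # unknown target signal x_0
A = rng.standard_normal((m, n))      # known linear mixing matrix (here random Gaussian)
noise = sigma_y * rng.standard_normal(m)

y = A @ x0 + noise                   # noisy linear measurements, Eq. (1)

# With m < n the system is under-determined, hence ill-posed even without noise.
assert y.shape == (m,) and m < n
```

Because $m < n$, infinitely many signals explain $\mathbf{y}$ exactly; the prior $p(\mathbf{x}_0)$ is what singles out plausible reconstructions.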
Figure 1: Typical results for different image restoration tasks on the CelebA-HQ 256×256 validation set, along with the average inference time in seconds. Our method (DMPS) achieves highly competitive or even better reconstruction performance with much less inference time. For a fair comparison, all algorithms are run on the same flow-based model with NFE = 50.
With the recent advent of diffusion models Sohl-Dickstein et al. (2015); Song and Ermon (2019); Ho et al. (2020); Dhariwal and Nichol (2021); Rombach et al. (2022) and flow-based models Lipman et al. (2022); Liu et al. (2022); Albergo et al. (2023); Ma et al. (2024), there has been a surge of interest in applying them to linear inverse problems with remarkable performance Kadkhodaie and Simoncelli (2020, 2021); Jalal et al. (2021a, b); Kawar et al. (2021, 2022); Chung et al. (2022b, a); Wang et al. (2022); Meng and Kabashima (2023, 2024); Pokle et al. (2023). One fundamental challenge in this field is computing the score of the noise-perturbed likelihood $p(\mathbf{y} \mid \mathbf{x}_t)$, i.e., $\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)$, where $\mathbf{x}_t$ is a noise-perturbed version of $\mathbf{x}_0$ at time $t$ defined by the forward process of the diffusion model Ho et al. (2020); Song and Ermon (2019). This is because, while $\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)$ is easily obtained for $t = 0$ from (1), it is intractable for general $t > 0$. To address this challenge, most diffusion and flow-based methods adopt the diffusion posterior sampling (DPS) framework Chung et al. (2022a), which leverages Tweedie's formula Robbins (1992) to obtain a posterior estimate of $\mathbf{x}_0$. While DPS and its variants achieve excellent reconstruction performance, they suffer from a big disadvantage: their inference is very slow, due to the time-consuming gradient calculation through back-propagation.
In this paper, we take an alternative perspective and provide a simple, fast solution to noisy linear inverse problems with diffusion or flow-based models by proposing a closed-form approximation to the intractable likelihood score $\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)$. Our primary goal is to reduce the inference time of existing methods with minimal degradation, rather than to compete with state-of-the-art performance. The key observation is that the noise-perturbed likelihood $p(\mathbf{y} \mid \mathbf{x}_t) = \int p(\mathbf{y} \mid \mathbf{x}_0)\,p(\mathbf{x}_0 \mid \mathbf{x}_t)\,d\mathbf{x}_0$ is unavailable only because the reverse transition probability $p(\mathbf{x}_0 \mid \mathbf{x}_t)$ is intractable, and that a closed-form approximation of it can be obtained by assuming an uninformative prior $p(\mathbf{x}_0)$. Interestingly, such an assumption is asymptotically accurate when the perturbed noise in $\mathbf{x}_t$ is negligibly small. The resulting algorithm, Diffusion Model based Posterior Sampling (DMPS), applies to both diffusion and flow-based models. Compared with the seminal DPS and its variants such as PGDM, thanks to the proposed closed-form approximation, no back-propagation through the pre-trained model is needed, which significantly reduces inference time. To verify its efficacy, we conduct experiments on a variety of linear inverse problems, including image super-resolution, denoising, deblurring, and colorization. Remarkably, as shown in Figure 1, despite its simplicity, DMPS achieves highly competitive or even better reconstruction performance in all these tasks, while the running time is significantly reduced.
2 Background
Diffusion models (DM) Song and Ermon (2019); Ho et al. (2020); Dhariwal and Nichol (2021); Song et al. (2023) and flow-based models (such as flow matching and rectified flow) Lipman et al. (2022); Liu et al. (2022); Albergo et al. (2023); Ma et al. (2024) can be seen as a unified class of probabilistic generative models that learn to turn random noise into data samples $\mathbf{x}_0 \sim p(\mathbf{x}_0)$. The forward time-dependent process $\mathbf{x}_0 \rightarrow \mathbf{x}_1 \rightarrow \cdots \rightarrow \mathbf{x}_T$ can be described as follows:
$$\mathbf{x}_t = a_t\,\mathbf{x}_0 + \sigma_t\,\boldsymbol{\epsilon}, \qquad (2)$$
where $a_t$ is a decreasing function of $t$, $\sigma_t$ is an increasing function of $t$, and $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ is i.i.d. standard Gaussian noise. Equivalently, the forward process (2) is modeled as
$$q(\mathbf{x}_t \mid \mathbf{x}_0) = \mathcal{N}(\mathbf{x}_t;\ a_t\mathbf{x}_0,\ \sigma_t^2\mathbf{I}). \qquad (3)$$
Both diffusion models and flow-based models aim to reverse the forward process (2) and generate new samples from a distribution that approximates the target data distribution $p(\mathbf{x}_0)$.
Diffusion Models: Diffusion models reverse the forward process (2) by performing a denoising task at each step, i.e., predicting the noise $\boldsymbol{\epsilon}$ from $\mathbf{x}_t$. In the seminal DDPM Ho et al. (2020), $a_t = \sqrt{\bar{\alpha}_t}$ and $\sigma_t^2 = 1 - \bar{\alpha}_t$, where $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$, $\alpha_t = 1 - \beta_t$, and $0 < \beta_1 < \beta_2 < \cdots < \beta_T < 1$ Ho et al. (2020). Denoting by $\mathbf{s}_\theta(\mathbf{x}_t, t)$ the noise predictor applied to $\mathbf{x}_t$, one can generate samples following the estimated reverse process Ho et al. (2020) as
$$\mathbf{x}_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{x}_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\mathbf{s}_\theta(\mathbf{x}_t, t)\right) + \sqrt{\beta_t}\,\mathbf{z}_t, \qquad (4)$$
where $\mathbf{z}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ is i.i.d. standard Gaussian noise. Note that in the ADM variant Dhariwal and Nichol (2021), the reverse noise variances are learned as $\{\tilde{\sigma}_t\}_{t=1}^{T}$, which further improves the performance of DDPM.
Diffusion models are also known as score-based generative models, since the denoising task is equivalent to approximating the score function $\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t)$ Song and Ermon (2019); Song et al. (2020). For example, for DDPM there is a one-to-one mapping between $\mathbf{s}_\theta(\mathbf{x}_t, t)$ and $\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t)$:

$$\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t) = -\frac{1}{\sqrt{1-\bar{\alpha}_t}}\,\mathbf{s}_\theta(\mathbf{x}_t, t). \qquad (5)$$
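Relation (5) can be checked numerically in a case where everything is available in closed form (a small numpy sketch of our own, not the authors' code): for a point-mass data distribution at $\mathbf{x}_0$, we have $\mathbf{x}_t \sim \mathcal{N}(\sqrt{\bar{\alpha}_t}\,\mathbf{x}_0, (1-\bar{\alpha}_t)\mathbf{I})$, the ideal noise predictor is $(\mathbf{x}_t - \sqrt{\bar{\alpha}_t}\,\mathbf{x}_0)/\sqrt{1-\bar{\alpha}_t}$, and (5) recovers the exact Gaussian score:

```python
import numpy as np

def score_from_eps(eps_pred, alpha_bar_t):
    """Convert a DDPM noise prediction into the score of p(x_t) via Eq. (5)."""
    return -eps_pred / np.sqrt(1.0 - alpha_bar_t)

rng = np.random.default_rng(1)
x0 = rng.standard_normal(4)
alpha_bar_t = 0.6
xt = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1 - alpha_bar_t) * rng.standard_normal(4)

# Bayes-optimal noise predictor for a point-mass prior at x0
eps_star = (xt - np.sqrt(alpha_bar_t) * x0) / np.sqrt(1 - alpha_bar_t)

# Exact score of N(sqrt(alpha_bar_t) x0, (1 - alpha_bar_t) I)
exact_score = -(xt - np.sqrt(alpha_bar_t) * x0) / (1 - alpha_bar_t)

assert np.allclose(score_from_eps(eps_star, alpha_bar_t), exact_score)
```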
Flow-based Models: Flow-based models can be viewed as a generalization of diffusion models Lipman et al. (2022); Liu et al. (2022); Albergo et al. (2023); Ma et al. (2024), which introduce a probability flow ODE with a velocity field Lipman et al. (2022); Ma et al. (2024):

$$\dot{\mathbf{x}}_t = \mathbf{v}(\mathbf{x}_t, t), \qquad (6)$$
where $\mathbf{v}(\mathbf{x}, t)$ can be obtained as the conditional expectation $\mathbf{v}(\mathbf{x}, t) = \mathbb{E}[\dot{\mathbf{x}}_t \mid \mathbf{x}_t = \mathbf{x}]$. Flow-based models solve the probability ODE (6) backwards by learning the velocity field $\mathbf{v}(\mathbf{x}, t)$ with a neural network $\mathbf{v}_\theta(\mathbf{x}, t)$, and a first-order ODE solver can be realized as

$$\mathbf{x}_{t-1} = \mathbf{x}_t - \mathbf{v}_\theta(\mathbf{x}_t, t)\,\Delta t, \qquad (7)$$
where $\Delta t$ is the sampling time interval. Interestingly, the score function $\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t)$ can also be expressed in terms of the velocity field Ma et al. (2024):

$$\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t) = \frac{1}{\sigma_t}\,\frac{a_t\,\mathbf{v}_\theta(\mathbf{x}_t, t) - \dot{a}_t\,\mathbf{x}_t}{\dot{a}_t\sigma_t - a_t\dot{\sigma}_t}. \qquad (8)$$
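Relation (8) can likewise be verified numerically (our own numpy sketch; the rectified-flow schedule $a_t = 1 - t$, $\sigma_t = t$ is assumed for concreteness). For a point-mass prior at $\mathbf{x}_0$, both the exact velocity and the exact score are known in closed form, and (8) maps one to the other:

```python
import numpy as np

def score_from_velocity(v, x, a, a_dot, sigma, sigma_dot):
    """Score of p(x_t) from the velocity field via Eq. (8)."""
    return (a * v - a_dot * x) / (sigma * (a_dot * sigma - a * sigma_dot))

rng = np.random.default_rng(2)
x0 = rng.standard_normal(5)
t = 0.3
a, a_dot = 1.0 - t, -1.0        # rectified-flow schedule a_t = 1 - t
sigma, sigma_dot = t, 1.0       # sigma_t = t

xt = a * x0 + sigma * rng.standard_normal(5)

# For a point mass at x0: v(x, t) = a_dot * x0 + sigma_dot * (x - a * x0) / sigma
v = a_dot * x0 + sigma_dot * (xt - a * x0) / sigma

# Exact score of N(a * x0, sigma^2 I)
exact_score = -(xt - a * x0) / sigma**2

assert np.allclose(score_from_velocity(v, xt, a, a_dot, sigma, sigma_dot), exact_score)
```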
Previous Methods with Diffusion and Flow-based Models: The problem of reconstructing $\mathbf{x}_0$ from the noisy $\mathbf{y}$ in (1) can be cast as posterior inference, i.e.,

$$p(\mathbf{x}_0 \mid \mathbf{y}) = \frac{p(\mathbf{x}_0)\,p(\mathbf{y} \mid \mathbf{x}_0)}{p(\mathbf{y})}, \qquad (9)$$
where $p(\mathbf{x}_0 \mid \mathbf{y})$ is the posterior distribution. Ideally, one could directly train diffusion or flow-based models on samples from $p(\mathbf{x}_0 \mid \mathbf{y})$. However, such a supervised approach is neither efficient nor flexible, and most previous methods adopt an unsupervised approach Jalal et al. (2021a); Chung et al. (2022a); Song et al. (2022); Pokle et al. (2023): given a pre-trained diffusion or flow-based model, one treats it as an implicit prior $p(\mathbf{x}_0)$ and performs posterior sampling through a reverse sampling process $\mathbf{x}_T \rightarrow \cdots \rightarrow \mathbf{x}_t \rightarrow \mathbf{x}_{t-1} \rightarrow \cdots \rightarrow \mathbf{x}_0$. The main challenge is thus how to incorporate the information in $\mathbf{y}$ into this reverse sampling process. Interestingly, while diffusion models and flow-based models admit slightly different forms, there exists a principled way thanks to the simple relation obtained from Bayes' rule (9):
$$\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t \mid \mathbf{y}) = \nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t) + \nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t), \qquad (10)$$
where $\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t \mid \mathbf{y})$ is the score of the posterior distribution (the posterior score), which is the sum of the prior score $\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t)$ and the likelihood score $\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)$. Given a pre-trained diffusion or flow-based model, the prior score $\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t)$ can be readily obtained from the model outputs thanks to the intrinsic connections (5) and (8). However, while $\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)$ follows directly from (1) when $t = 0$, it becomes intractable for general $t > 0$ Chung et al. (2022a). To see this, one can equivalently write $p(\mathbf{y} \mid \mathbf{x}_t)$ as
$$p(\mathbf{y} \mid \mathbf{x}_t) = \int p(\mathbf{y} \mid \mathbf{x}_0)\,p(\mathbf{x}_0 \mid \mathbf{x}_t)\,d\mathbf{x}_0, \qquad (11)$$

where, by Bayes' rule,

$$p(\mathbf{x}_0 \mid \mathbf{x}_t) = \frac{q(\mathbf{x}_t \mid \mathbf{x}_0)\,p(\mathbf{x}_0)}{\int q(\mathbf{x}_t \mid \mathbf{x}_0)\,p(\mathbf{x}_0)\,d\mathbf{x}_0}. \qquad (12)$$
For both diffusion and flow-based models, although the forward transition probability $q(\mathbf{x}_t \mid \mathbf{x}_0)$ is known exactly from (3), the reverse transition probability $p(\mathbf{x}_0 \mid \mathbf{x}_t)$ is difficult to obtain. Consequently, the remaining key challenge is the calculation of the noise-perturbed likelihood score $\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)$. A variety of methods Jalal et al. (2021a); Chung et al. (2022a); Song et al. (2022); Pokle et al. (2023) have been proposed to approximate $\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)$ (or, equivalently, $p(\mathbf{y} \mid \mathbf{x}_t)$), and most of them build on the seminal DPS Chung et al. (2022a), which leverages Tweedie's formula to obtain the posterior estimate of $\mathbf{x}_0$ Robbins (1992); Chung et al. (2022a):
$$\hat{\mathbf{x}}_0(\mathbf{x}_t) := \mathbb{E}[\mathbf{x}_0 \mid \mathbf{x}_t] = \frac{1}{a_t}\left(\mathbf{x}_t + \sigma_t^2\,\nabla_{\mathbf{x}_t} \log p_t(\mathbf{x}_t)\right), \qquad (13)$$
where $\nabla_{\mathbf{x}_t} \log p_t(\mathbf{x}_t)$ is approximated by the neural network via (5) and (8) for diffusion and flow-based models, respectively. In particular, DPS uses a Laplace approximation $p(\mathbf{y} \mid \mathbf{x}_t) \approx p(\mathbf{y} \mid \hat{\mathbf{x}}_0(\mathbf{x}_t)) = \mathcal{N}(\mathbf{y};\ \mathbf{A}\hat{\mathbf{x}}_0(\mathbf{x}_t),\ \sigma_y^2\mathbf{I})$, while both PGDM Song et al. (2022) and OT-ODE Pokle et al. (2023) use the approximation $p(\mathbf{y} \mid \mathbf{x}_t) \approx \mathcal{N}(\mathbf{y};\ \mathbf{A}\hat{\mathbf{x}}_0(\mathbf{x}_t),\ \gamma_t^2\mathbf{A}\mathbf{A}^\top + \sigma_y^2\mathbf{I})$, where $\gamma_t$ is a hyper-parameter for the variance term. Nevertheless, while DPS and its variants achieve excellent reconstruction performance, they suffer from a significant drawback: inference is very slow, due to the time-consuming gradient of the pre-trained model output w.r.t. $\mathbf{x}_t$ required to compute the likelihood score $\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)$.
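Tweedie's formula (13) can be illustrated in a case where the score is known exactly (a numpy sketch of our own, not tied to any pre-trained network): for a point-mass prior at $\mathbf{x}_0$, the posterior-mean estimate recovers $\mathbf{x}_0$ exactly:

```python
import numpy as np

def tweedie_x0_hat(xt, score, a_t, sigma_t):
    """Posterior-mean estimate of x0 from x_t via Tweedie's formula, Eq. (13)."""
    return (xt + sigma_t**2 * score) / a_t

rng = np.random.default_rng(3)
x0 = rng.standard_normal(6)
a_t, sigma_t = 0.8, 0.5
xt = a_t * x0 + sigma_t * rng.standard_normal(6)

# Exact score when p(x0) is a point mass at x0: x_t ~ N(a_t x0, sigma_t^2 I)
score = -(xt - a_t * x0) / sigma_t**2

assert np.allclose(tweedie_x0_hat(xt, score, a_t, sigma_t), x0)
```

In DPS and its variants, the score above comes from the network, so differentiating $\hat{\mathbf{x}}_0(\mathbf{x}_t)$ w.r.t. $\mathbf{x}_t$ requires back-propagation through the model; this is exactly the cost DMPS avoids.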
3 Method
In this section, we propose a fast and efficient closed-form approximation to the intractable likelihood score $\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)$, termed the noise-perturbed pseudo-likelihood score. We first derive the result under the general setting (2)-(3), and then apply it to diffusion and flow-based models, respectively.
3.1 Noise-Perturbed Pseudo-Likelihood Score
To tackle the intractability of $\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)$, we introduce a simple approximation under the following assumption:
Assumption 1 (uninformative prior). The prior $p(\mathbf{x}_0)$ in (12) is uninformative (flat), so that $p(\mathbf{x}_0 \mid \mathbf{x}_t) \propto q(\mathbf{x}_t \mid \mathbf{x}_0)$, where $\propto$ denotes equality up to a constant scaling.
Note that while the uninformative-prior assumption appears crude at first sight, it is asymptotically accurate when the perturbed noise in $\mathbf{x}_t$ becomes negligible, as verified in Appendix A.
Under Assumption 1, we obtain a simple closed-form approximation of $\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)$, called the noise-perturbed pseudo-likelihood score and denoted $\nabla_{\mathbf{x}_t} \log \tilde{p}(\mathbf{y} \mid \mathbf{x}_t)$, as shown in Theorem 3.1.
Theorem 3.1 (noise-perturbed pseudo-likelihood score for (2)). For the general forward process (2), under Assumption 1, the noise-perturbed likelihood score $\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)$ for $\mathbf{y} = \mathbf{A}\mathbf{x}_0 + \mathbf{n}$ in (1) admits a closed form:

$$\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t) \approx \nabla_{\mathbf{x}_t} \log \tilde{p}(\mathbf{y} \mid \mathbf{x}_t) = \frac{1}{a_t}\,\mathbf{A}^\top\left(\sigma_y^2\mathbf{I} + \frac{\sigma_t^2}{a_t^2}\,\mathbf{A}\mathbf{A}^\top\right)^{-1}\left(\mathbf{y} - \frac{1}{a_t}\,\mathbf{A}\mathbf{x}_t\right). \qquad (14)$$
Proof. From Assumption 1, we have $p(\mathbf{x}_0 \mid \mathbf{x}_t) \propto q(\mathbf{x}_t \mid \mathbf{x}_0)$. Recall that for the forward process (2), $q(\mathbf{x}_t \mid \mathbf{x}_0)$ is Gaussian (3). By completing the square w.r.t. $\mathbf{x}_0$, an approximation of $p(\mathbf{x}_0 \mid \mathbf{x}_t)$ can be derived as

$$p(\mathbf{x}_0 \mid \mathbf{x}_t) \approx \mathcal{N}\!\left(\mathbf{x}_0;\ \frac{\mathbf{x}_t}{a_t},\ \frac{\sigma_t^2}{a_t^2}\,\mathbf{I}\right), \qquad (15)$$

whereby $\mathbf{x}_0$ can be equivalently written as $\mathbf{x}_0 = \frac{\mathbf{x}_t}{a_t} + \frac{\sigma_t}{a_t}\,\mathbf{u}$, where $\mathbf{u} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. Thus, from (1), we obtain an alternative representation of $\mathbf{y}$:

$$\mathbf{y} = \frac{\mathbf{A}\mathbf{x}_t}{a_t} + \frac{\sigma_t}{a_t}\,\mathbf{A}\mathbf{u} + \mathbf{n}. \qquad (16)$$

After some simple algebra, the likelihood $p(\mathbf{y} \mid \mathbf{x}_t)$ can be approximated as

$$\tilde{p}(\mathbf{y} \mid \mathbf{x}_t) = \mathcal{N}\!\left(\mathbf{y};\ \frac{\mathbf{A}\mathbf{x}_t}{a_t},\ \sigma_y^2\mathbf{I} + \frac{\sigma_t^2}{a_t^2}\,\mathbf{A}\mathbf{A}^\top\right), \qquad (17)$$

where $\tilde{p}(\mathbf{y} \mid \mathbf{x}_t)$ denotes the pseudo-likelihood, as opposed to the exact $p(\mathbf{y} \mid \mathbf{x}_t)$, due to Assumption 1. Using (17), one readily obtains the closed-form noise-perturbed pseudo-likelihood score $\nabla_{\mathbf{x}_t} \log \tilde{p}(\mathbf{y} \mid \mathbf{x}_t)$ in (14), which completes the proof. ∎
As shown in Theorem 3.1, we obtain a simple closed-form approximation of the intractable likelihood score, which is much easier to implement than DPS and its variants. In the special case where $\mathbf{A}$ is row-orthogonal, i.e., $\mathbf{A}\mathbf{A}^\top$ is diagonal, the matrix inversion is trivial and (14) simply reduces to

$$\left[\nabla_{\mathbf{x}_t} \log \tilde{p}(\mathbf{y} \mid \mathbf{x}_t)\right]_i = \frac{\mathbf{a}_i^\top\left(\mathbf{y} - \frac{1}{a_t}\,\mathbf{A}\mathbf{x}_t\right)}{\sigma_y^2\,a_t + \frac{\sigma_t^2}{a_t}\,\|\mathbf{a}_i\|_2^2}, \qquad (18)$$

where $[\cdot]_i$ is the $i$-th element and $\mathbf{a}_i$ is the $i$-th row of $\mathbf{A}$. For general matrices $\mathbf{A}$, such an inversion is unavoidable, but it can also be efficiently implemented by resorting to the singular value decomposition (SVD) of $\mathbf{A}$, as shown in Corollary 3.2.
Corollary 3.2 (efficient computation via SVD). For the general forward process (2), the noise-perturbed pseudo-likelihood score $\nabla_{\mathbf{x}_t} \log \tilde{p}(\mathbf{y} \mid \mathbf{x}_t)$ in (14) of Theorem 3.1 can be equivalently computed as

$$\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t) \approx \nabla_{\mathbf{x}_t} \log \tilde{p}(\mathbf{y} \mid \mathbf{x}_t) = \frac{1}{a_t}\,\mathbf{V}\boldsymbol{\Sigma}\left(\sigma_y^2\mathbf{I} + \frac{\sigma_t^2}{a_t^2}\,\boldsymbol{\Sigma}^2\right)^{-1}\mathbf{U}^\top\left(\mathbf{y} - \frac{1}{a_t}\,\mathbf{A}\mathbf{x}_t\right), \qquad (19)$$

where $\mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$ is the SVD of $\mathbf{A}$ and $\boldsymbol{\Sigma}^2$ denotes the element-wise square of $\boldsymbol{\Sigma}$.
Proof. The result is straightforward from Theorem 3.1. ∎
Remark 3.3. Thanks to the SVD, there is no need to compute the matrix inverse in (14) at every step $t$. Instead, one performs the SVD of $\mathbf{A}$ only once and then computes $\nabla_{\mathbf{x}_t} \log \tilde{p}(\mathbf{y} \mid \mathbf{x}_t)$ via (19), which is quite simple since $\boldsymbol{\Sigma}$ is diagonal.
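The SVD form (19) is straightforward to implement. The numpy sketch below (our own illustrative code, not the official release; the sizes and schedule values are arbitrary) computes the pseudo-likelihood score both directly from (14) and via a precomputed thin SVD as in (19), and checks that the two agree:

```python
import numpy as np

def pseudo_likelihood_score_svd(y, A_svd, xt, a_t, sigma_t, sigma_y):
    """Noise-perturbed pseudo-likelihood score via the SVD of A, Eq. (19)."""
    U, s, Vt = A_svd                        # thin SVD: A = U @ diag(s) @ Vt
    resid = y - (U * s) @ (Vt @ xt) / a_t   # y - A x_t / a_t
    d = 1.0 / (sigma_y**2 + (sigma_t**2 / a_t**2) * s**2)  # diagonal inverse
    return (Vt.T * s) @ (d * (U.T @ resid)) / a_t

rng = np.random.default_rng(4)
m, n = 6, 10
A = rng.standard_normal((m, n))
xt, y = rng.standard_normal(n), rng.standard_normal(m)
a_t, sigma_t, sigma_y = 0.7, 0.6, 0.05

svd_score = pseudo_likelihood_score_svd(
    y, np.linalg.svd(A, full_matrices=False), xt, a_t, sigma_t, sigma_y)

# Direct evaluation of Eq. (14) for comparison
cov = sigma_y**2 * np.eye(m) + (sigma_t**2 / a_t**2) * A @ A.T
direct_score = A.T @ np.linalg.solve(cov, y - A @ xt / a_t) / a_t

assert np.allclose(svd_score, direct_score)
```

Once the SVD is computed, each step costs only matrix-vector products and a diagonal solve, instead of an $m \times m$ inversion per step.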
3.2 DMPS: Diffusion Model Based Posterior Sampling
Given the approximation of the likelihood score $\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)$, we can easily modify the sampling equations of the original diffusion and flow-based models from a unified Bayesian perspective. Here we introduce a simple yet universal three-step procedure that applies to both diffusion and flow-based models.
Step 1: Reformulate the original sampling equations for unconditional generation in terms of the prior score $\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t)$. This step requires connecting the generative model output (of either a diffusion or a flow-based model) with the score function $\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t)$. For example, given the connections (5) and (8), the original sampling equation (4) for DDPM and (7) for flow-based models can be rewritten using $\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t)$ as follows:
$$\text{DDPM:}\quad \mathbf{x}_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{x}_t + (1-\alpha_t)\,\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t)\right) + \sqrt{\beta_t}\,\mathbf{z}_t, \qquad (20)$$

$$\text{Flow-based:}\quad \mathbf{x}_{t-1} = \mathbf{x}_t - \left(\frac{\dot{a}_t}{a_t}\,\mathbf{x}_t + \frac{\sigma_t\left(\dot{a}_t\sigma_t - a_t\dot{\sigma}_t\right)}{a_t}\,\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t)\right)\Delta t. \qquad (21)$$
Step 2: Replace the prior score $\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t)$ in the sampling equations from Step 1 with the posterior score $\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t \mid \mathbf{y})$ as in (10). For example, for DDPM and flow-based models, the corresponding sampling equations (20)-(21) become
$$\text{DDPM:}\quad \mathbf{x}_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{x}_t + (1-\alpha_t)\left(\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t) + \nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)\right)\right) + \sqrt{\beta_t}\,\mathbf{z}_t, \qquad (24)$$

$$\text{Flow-based:}\quad \mathbf{x}_{t-1} = \mathbf{x}_t - \left(\frac{\dot{a}_t}{a_t}\,\mathbf{x}_t + \frac{\sigma_t\left(\dot{a}_t\sigma_t - a_t\dot{\sigma}_t\right)}{a_t}\left(\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t) + \nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)\right)\right)\Delta t. \qquad (25)$$
Step 3: Express the prior score $\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t)$ back in terms of the generative model outputs in the sampling equations obtained in Step 2. Collecting the additional terms contributed by the likelihood score, we obtain the final posterior sampling equations. For example, for DDPM and flow-based models, the corresponding sampling equations (24)-(25) finally become
$$\text{DDPM:}\quad \mathbf{x}_{t-1} = \underbrace{\frac{1}{\sqrt{\alpha_t}}\left(\mathbf{x}_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\mathbf{s}_\theta(\mathbf{x}_t, t)\right) + \sqrt{\beta_t}\,\mathbf{z}_t}_{\text{original sampling equation}} + \underbrace{\frac{1-\alpha_t}{\sqrt{\alpha_t}}\,\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)}_{\text{additional part}}, \qquad (26)$$

$$\text{Flow-based:}\quad \mathbf{x}_{t-1} = \underbrace{\mathbf{x}_t - \mathbf{v}_\theta(\mathbf{x}_t, t)\,\Delta t}_{\text{original sampling equation}} - \underbrace{\frac{\sigma_t\left(\dot{a}_t\sigma_t - a_t\dot{\sigma}_t\right)}{a_t}\,\nabla_{\mathbf{x}_t} \log p(\mathbf{y} \mid \mathbf{x}_t)\,\Delta t}_{\text{additional part}}, \qquad (27)$$
where the additional part is the extra term incorporated into the original sampling equation to enable posterior sampling from $p(\mathbf{x}_0 \mid \mathbf{y})$ given $\mathbf{y}$.
Following the above procedure, we obtain the resulting algorithms for DDPM and flow-based models, shown in Algorithm 1 and Algorithm 2, respectively. For brevity, we refer to both as Diffusion Model based Posterior Sampling (dubbed DMPS), since flow-based models can be viewed as a generalization of diffusion models Albergo et al. (2023). In the DDPM version, the reverse diffusion variances $\{\tilde{\sigma}_t\}_{t=1}^{T}$ are learned as in ADM Dhariwal and Nichol (2021). Both versions of DMPS can be implemented on top of existing code simply by adding two extra lines: the pseudo-likelihood score computation and the corresponding correction step in each algorithm.
Algorithm 1: DMPS (DDPM version)

Input: $\mathbf{y}$, $\mathbf{A}$, $\sigma_y^2$, $\{\tilde{\sigma}_t\}_{t=1}^{T}$, $\lambda$
Initialization: $\mathbf{x}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$; compute the SVD $\mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$ once
for $t = T$ to $1$:
1. Draw $\mathbf{z}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$
2. $\mathbf{x}_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{x}_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\mathbf{s}_\theta(\mathbf{x}_t, t)\right) + \tilde{\sigma}_t\,\mathbf{z}_t$
3. $\nabla_{\mathbf{x}_t} \log \tilde{p}(\mathbf{y} \mid \mathbf{x}_t) = \frac{1}{\sqrt{\bar{\alpha}_t}}\,\mathbf{V}\boldsymbol{\Sigma}\left(\sigma_y^2\mathbf{I} + \frac{1-\bar{\alpha}_t}{\bar{\alpha}_t}\,\boldsymbol{\Sigma}^2\right)^{-1}\mathbf{U}^\top\left(\mathbf{y} - \frac{1}{\sqrt{\bar{\alpha}_t}}\,\mathbf{A}\mathbf{x}_t\right)$
4. $\mathbf{x}_{t-1} \leftarrow \mathbf{x}_{t-1} + \lambda\,\frac{1-\alpha_t}{\sqrt{\alpha_t}}\,\nabla_{\mathbf{x}_t} \log \tilde{p}(\mathbf{y} \mid \mathbf{x}_t)$
Output: $\mathbf{x}_0$

Algorithm 2: DMPS (flow-based version)

Input: $\mathbf{y}$, $\mathbf{A}$, $\sigma_y^2$, $\Delta t = 1/T$, $\lambda$
Initialization: $\mathbf{x}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$; compute the SVD $\mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$ once
for $t = T$ to $1$:
1. $\mathbf{x}_{t-1} = \mathbf{x}_t - \mathbf{v}_\theta(\mathbf{x}_t, t)\,\Delta t$
2. $\nabla_{\mathbf{x}_t} \log \tilde{p}(\mathbf{y} \mid \mathbf{x}_t) = \frac{1}{a_t}\,\mathbf{V}\boldsymbol{\Sigma}\left(\sigma_y^2\mathbf{I} + \frac{\sigma_t^2}{a_t^2}\,\boldsymbol{\Sigma}^2\right)^{-1}\mathbf{U}^\top\left(\mathbf{y} - \frac{1}{a_t}\,\mathbf{A}\mathbf{x}_t\right)$
3. $\mathbf{x}_{t-1} \leftarrow \mathbf{x}_{t-1} - \lambda\,\frac{\sigma_t\left(\dot{a}_t\sigma_t - a_t\dot{\sigma}_t\right)}{a_t}\,\nabla_{\mathbf{x}_t} \log \tilde{p}(\mathbf{y} \mid \mathbf{x}_t)\,\Delta t$
Output: $\mathbf{x}_0$
Remark: A scaling parameter $\lambda > 0$ is introduced in both algorithms, similar to classifier-guidance diffusion sampling Dhariwal and Nichol (2021). Empirically, performance is robust to the choice of $\lambda$, as shown in Appendix B, and we fix $\lambda = 1.75$ for DMPS (DDPM version) and $\lambda = 2.0$ for DMPS (flow-based version) in all experiments.
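To make the three-step recipe concrete, the following toy sketch (our own illustration, not the released implementation) runs a DMPS loop in the style of Algorithm 1 using the exact score of a synthetic standard-Gaussian prior in place of a trained network $\mathbf{s}_\theta$, so the whole loop is runnable without any checkpoint; the $\beta$-schedule, the problem sizes, and the use of variance $\beta_t$ instead of learned $\tilde{\sigma}_t$ are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, T = 4, 8, 200
sigma_y, lam = 0.05, 1.75

# Problem instance: y = A x0 + noise, with a standard-Gaussian "prior" on x0
x0 = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = A @ x0 + sigma_y * rng.standard_normal(m)
U, s, Vt = np.linalg.svd(A, full_matrices=False)   # SVD computed once

# DDPM schedule
beta = np.linspace(1e-4, 0.02, T)
alpha = 1.0 - beta
alpha_bar = np.cumprod(alpha)

# For x0 ~ N(0, I) we have x_t ~ N(0, I) exactly, so the ideal noise
# predictor is s_theta(x_t, t) = sqrt(1 - alpha_bar_t) * x_t (no network needed)
def s_theta(xt, t):
    return np.sqrt(1.0 - alpha_bar[t]) * xt

xt = rng.standard_normal(n)                        # x_T ~ N(0, I)
for t in range(T - 1, -1, -1):
    z = rng.standard_normal(n) if t > 0 else 0.0
    # Unconditional DDPM step, Eq. (4) (variance beta_t for simplicity)
    xt_prev = (xt - (1 - alpha[t]) / np.sqrt(1 - alpha_bar[t]) * s_theta(xt, t)) \
              / np.sqrt(alpha[t]) + np.sqrt(beta[t]) * z
    # Pseudo-likelihood score via the SVD, Eq. (19) with a_t = sqrt(alpha_bar_t)
    a_t = np.sqrt(alpha_bar[t])
    resid = U.T @ (y - A @ xt / a_t)
    lik_score = (Vt.T * s) @ (resid / (sigma_y**2 + (1 - alpha_bar[t]) / alpha_bar[t] * s**2)) / a_t
    # DMPS correction, the "additional part" of Eq. (26)
    xt = xt_prev + lam * (1 - alpha[t]) / np.sqrt(alpha[t]) * lik_score

assert xt.shape == (n,) and np.all(np.isfinite(xt))
```

No gradient through the "model" is ever taken: the likelihood-score line is a closed-form matrix-vector computation, which is the source of DMPS's speed advantage over DPS-style guidance.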
4 Experiments
In this section, we conduct experiments on a variety of noisy linear inverse problems to demonstrate the efficacy of the proposed DMPS method, for both diffusion models and flow-based models. The code is available at https://github.com/mengxiangming/dmps.
4.1 Experimental Setup
Tasks: The tasks we consider include image super-resolution (SR), denoising, deblurring, and image colorization. In particular: (a) for SR, bicubic downsampling is performed as in Chung et al. (2022a); (b) for deblurring, uniform blur of size 9×9 Kawar et al. (2022) (for DDPM) and Gaussian blur (for flow-based models) are used; (c) for colorization, the grayscale image is obtained by averaging the red, green, and blue channels of each pixel Kawar et al. (2022). For all tasks, additive Gaussian noise $\mathbf{n}$ with $\sigma_y = 0.05$ is added, except for the denoising task, where a larger noise level $\sigma_y = 0.5$ is used.
Datasets: Both FFHQ Karras et al. (2019) and CelebA-HQ Karras et al. (2018) are considered. More results on FFHQ-cat, LSUN-bedroom, and AFHQ-cat can be found in Appendix C.
Pre-trained Models: For a fair comparison, we use the same pre-trained model for all evaluated methods. For diffusion models, the pre-trained ADM model Choi et al. (2021) is used, available at DDPM-checkpoint. For flow-based models, we use the pre-trained rectified flow model Liu et al. (2022), available at flow-checkpoint, with the forward process (3) specified as $a_t = 1 - t$, $\sigma_t = t$.
Comparison Methods: We compare DMPS with the following methods: DPS Chung et al. (2022a), PGDM Song et al. (2022), and the OT-ODE method Pokle et al. (2023). Actually, OT-ODE can be viewed as the flow-based version of PGDM. For DPS, we also compare two versions: one is the original DDPM version, the other is the flow-based version obtained following the procedures described in Section 3.2.
Metrics: Three widely used metrics are considered: the standard distortion metric peak signal-to-noise ratio (PSNR, dB), and two popular perceptual metrics, the structural similarity index measure (SSIM) Wang et al. (2004) and the Learned Perceptual Image Patch Similarity (LPIPS) Zhang et al. (2018).
GPU: All results are run on a single NVIDIA Tesla V100.
| Method | SR PSNR↑ | SR SSIM↑ | SR LPIPS↓ | Deblur PSNR↑ | Deblur SSIM↑ | Deblur LPIPS↓ | Color. PSNR↑ | Color. SSIM↑ | Color. LPIPS↓ | Denoise PSNR↑ | Denoise SSIM↑ | Denoise LPIPS↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DMPS (DDPM, ours) | 27.63 | 0.8450 | 0.2071 | 27.26 | 0.7644 | 0.2222 | 21.09 | 0.9592 | 0.2738 | 27.81 | 0.8777 | 0.2435 |
| DPS (DDPM) | 26.78 | 0.8391 | 0.2329 | 26.50 | 0.8151 | 0.2248 | 11.53 | 0.7923 | 0.5755 | 27.22 | 0.8969 | 0.2428 |
| PGDM | 27.60 | 0.8345 | 0.2077 | 26.65 | 0.7458 | 0.2196 | 12.15 | 0.8920 | 0.3969 | 27.60 | 0.8682 | 0.2425 |

Table 1: Quantitative comparison (PSNR (dB), SSIM, LPIPS) of different algorithms for different tasks on the FFHQ 256×256 1k validation dataset. The same pre-trained DDPM model is used.
| Method | SR PSNR↑ | SR SSIM↑ | SR LPIPS↓ | Deblur PSNR↑ | Deblur SSIM↑ | Deblur LPIPS↓ | Color. PSNR↑ | Color. SSIM↑ | Color. LPIPS↓ | Denoise PSNR↑ | Denoise SSIM↑ | Denoise LPIPS↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DMPS (flow-based, ours) | 28.29 | 0.8011 | 0.2329 | 26.21 | 0.7235 | 0.2637 | 23.31 | 0.8861 | 0.2901 | 29.04 | 0.8166 | 0.2821 |
| DPS (flow-based) | 28.05 | 0.7754 | 0.2266 | 22.64 | 0.5787 | 0.3403 | 20.92 | 0.8061 | 0.3335 | 27.93 | 0.7465 | 0.2882 |
| OT-ODE | 27.71 | 0.7657 | 0.2302 | 25.84 | 0.7084 | 0.2573 | 21.67 | 0.8696 | 0.3094 | 22.76 | 0.3820 | 0.4778 |

Table 2: Quantitative comparison (PSNR (dB), SSIM, LPIPS) of different algorithms for different tasks on the CelebA-HQ validation set. The same pre-trained flow-based model is used.

Figure 2: Typical results on the FFHQ 256×256 1k validation set for different noisy linear inverse problems: (a) super-resolution (×4), (b) denoising ($\sigma_y = 0.5$), (c) colorization, (d) deblurring (uniform). All algorithms are based on the same DDPM model. In all cases the measurements contain Gaussian noise with $\sigma_y = 0.05$, except denoising, where $\sigma_y = 0.5$.
Results: We first make a quantitative comparison in terms of the different metrics. Table 1 reports the reconstruction performance of the diffusion-model-based algorithms on the FFHQ dataset, and Table 2 reports the performance of the flow-based algorithms on the CelebA-HQ dataset. As shown in Table 1 and Table 2, despite its simplicity, the proposed DMPS achieves highly competitive or even better performance than the baselines.
Second, we make a qualitative comparison between the algorithms on the different tasks. Figure 2 shows typical reconstructed images from the diffusion-model-based algorithms on FFHQ, and Figure 1 shows typical results from the flow-based algorithms on CelebA-HQ. As shown in Figure 2 and Figure 1, in all tasks DMPS produces high-quality, realistic images that match details of the ground truth more closely. For example, for super-resolution, note the ear stud in the first row of Figure 2(a) and the hand on the shoulder and the background in the second row of Figure 2(a); for denoising, see the background door in the first row of Figure 2(b), the collar in the second row of Figure 2(b), and the last row of Figure 1; for colorization, DPS tends to produce over-bright images, while DMPS produces more naturally colored ones, as shown in Figure 2(c) and Figure 1.
| Method | Inference time [s] |
|---|---|
| DMPS (DDPM, ours) | 67.02 |
| DPS (DDPM) | 194.42 |
| PGDM | 182.35 |

| Method | Inference time [s] |
|---|---|
| DMPS (flow-based, ours) | 4.45 |
| DPS (flow-based) | 8.04 |
| OT-ODE | 6.44 |

Figure 3: Comparison of the inference time of different methods. Left: DDPM models with NFE = 1000, obtained on the SR task for FFHQ 256×256. Right: flow-based models with NFE = 50, obtained on the SR task for CelebA-HQ 256×256.
Finally, we evaluate the inference time of the different algorithms, which is one of the key motivations of this paper. We emphasize again that our main goal is not to compete with state-of-the-art reconstruction quality but to provide a fast method. For a fair comparison, all algorithms use the same pre-trained model for both the diffusion and flow-based settings. Figure 3 shows the average running time: the left table reports results for diffusion models with the number of function evaluations NFE = 1000, and the right table reports results for flow-based models with NFE = 50. In both settings, the inference time of the proposed DMPS is significantly lower than that of the other methods, which is appealing in practical applications.
5 Discussion and Conclusion
In this paper, we proposed a fast and effective closed-form approximation of the intractable noise-perturbed likelihood score, leading to Diffusion Model based Posterior Sampling (dubbed DMPS). For both diffusion and flow-based models, we evaluated the effectiveness of DMPS on multiple linear inverse problems, including image super-resolution, denoising, deblurring, and colorization. Despite its simplicity, DMPS achieves highly competitive or even better reconstruction performance while being significantly faster at inference.
Limitations & Future Work: While DMPS substantially reduces inference time and achieves competitive reconstruction performance, it still has several limitations. First, although memory-efficient SVDs exist for most matrices $\mathbf{A}$ of practical interest Kawar et al. (2022), the SVD step in DMPS remains difficult to implement for more general matrices $\mathbf{A}$. Second, DMPS cannot be directly applied to the popular latent diffusion models such as Stable Diffusion Rombach et al. (2022), which are widely used for their efficiency. Addressing these limitations is left as future work.
Acknowledgements
X. Meng would like to sincerely thank Yichi Zhang and Jim Yici Yan from UIUC for helpful discussions. This work was supported by NSFC Grant No. 62306277, the Fundamental Research Funds for the Zhejiang Provincial Universities Grant No. K20240090, the Japan Science and Technology Agency (JST) Grant No. JPMJCR1912, and the Japan Society for the Promotion of Science (JSPS) Grant No. JP22H05117.
References
- Albergo et al. (2023): Michael S. Albergo, Nicholas M. Boffi, and Eric Vanden-Eijnden. Stochastic interpolants: A unifying framework for flows and diffusions. arXiv preprint arXiv:2303.08797, 2023.
- Bertalmio et al. (2000): Marcelo Bertalmio, Guillermo Sapiro, Vincent Caselles, and Coloma Ballester. Image inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pages 417–424, 2000.
- Buades et al. (2005): Antoni Buades, Bartomeu Coll, and Jean-Michel Morel. A review of image denoising algorithms, with a new one. Multiscale Modeling & Simulation, 4(2):490–530, 2005.
- Candès and Wakin (2008): Emmanuel J. Candès and Michael B. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21–30, 2008.
- Candès et al. (2006): Emmanuel J. Candès, Justin Romberg, and Terence Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
- Choi et al. (2021): Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. ILVR: Conditioning method for denoising diffusion probabilistic models. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 14347–14356. IEEE, 2021.
- Chung et al. (2022a): Hyungjin Chung, Jeongsol Kim, Michael T. Mccann, Marc L. Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. arXiv preprint arXiv:2209.14687, 2022a.
- Chung et al. (2022b): Hyungjin Chung, Byeongsu Sim, Dohoon Ryu, and Jong Chul Ye. Improving diffusion models for inverse problems using manifold constraints. arXiv preprint arXiv:2206.00941, 2022b.
- Dhariwal and Nichol (2021): Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 34:8780–8794, 2021.
- Fazel et al. (2008): Maryam Fazel, E. Candès, Benjamin Recht, and P. Parrilo. Compressed sensing and robust recovery of low rank matrices. In 2008 42nd Asilomar Conference on Signals, Systems and Computers, pages 1043–1047. IEEE, 2008.
- Ho et al. (2020): Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
- Jalal et al. (2021a): Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G. Dimakis, and Jon Tamir. Robust compressed sensing MRI with deep generative priors. Advances in Neural Information Processing Systems, 34:14938–14954, 2021a.
- Jalal et al. (2021b): Ajil Jalal, Sushrut Karmalkar, Alex Dimakis, and Eric Price. Instance-optimal compressed sensing via posterior sampling. In International Conference on Machine Learning, pages 4709–4720. PMLR, 2021b.
- Kadkhodaie and Simoncelli (2021): Zahra Kadkhodaie and Eero Simoncelli. Stochastic solutions for linear inverse problems using the prior implicit in a denoiser. Advances in Neural Information Processing Systems, 34:13242–13254, 2021.
- Kadkhodaie and Simoncelli (2020): Zahra Kadkhodaie and Eero P. Simoncelli. Solving linear inverse problems using the prior implicit in a denoiser. arXiv preprint arXiv:2007.13640, 2020.
- Karras et al. (2018): Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations, 2018.
- Karras et al. (2019): Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401–4410, 2019.
- Kawar et al. (2021): Bahjat Kawar, Gregory Vaksman, and Michael Elad. SNIPS: Solving noisy inverse problems stochastically. Advances in Neural Information Processing Systems, 34:21757–21769, 2021.
- Kawar et al. (2022): Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song. Denoising diffusion restoration models. arXiv preprint arXiv:2201.11793, 2022.
- Ledig et al. (2017): Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4681–4690, 2017.
- Lipman et al. (2022): Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. arXiv preprint arXiv:2210.02747, 2022.
- Liu et al. (2022): Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022.
- Ma et al. (2024): Nanye Ma, Mark Goldstein, Michael S. Albergo, Nicholas M. Boffi, Eric Vanden-Eijnden, and Saining Xie. SiT: Exploring flow and diffusion-based generative models with scalable interpolant transformers. arXiv preprint arXiv:2401.08740, 2024.
- Meng and Kabashima (2023): Xiangming Meng and Yoshiyuki Kabashima. Quantized compressed sensing with score-based generative models. In International Conference on Learning Representations, 2023.
- Meng and Kabashima (2024): Xiangming Meng and Yoshiyuki Kabashima. QCS-SGM+: Improved quantized compressed sensing with score-based generative models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 14341–14349, 2024.
- O'Sullivan (1986): Finbarr O'Sullivan. A statistical perspective on ill-posed inverse problems. Statistical Science, pages 502–518, 1986.
- Pokle et al. (2023): Ashwini Pokle, Matthew J. Muckley, Ricky T. Q. Chen, and Brian Karrer. Training-free linear image inversion via flows. arXiv preprint arXiv:2310.04432, 2023.
- Robbins (1992): Herbert E. Robbins. An empirical Bayes approach to statistics. In Breakthroughs in Statistics: Foundations and Basic Theory, pages 388–394. Springer, 1992.
- Rombach et al. (2022): Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
- Sohl-Dickstein et al. (2015): Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256–2265. PMLR, 2015.
- Song et al. (2020): Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.
- Song et al. (2022): Jiaming Song, Arash Vahdat, Morteza Mardani, and Jan Kautz. Pseudoinverse-guided diffusion models for inverse problems. In International Conference on Learning Representations, 2022.
- Song and Ermon (2019): Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in Neural Information Processing Systems, 32, 2019.
- Song et al. (2023): Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. arXiv preprint arXiv:2303.01469, 2023.
- Ulyanov et al. (2018): Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9446–9454, 2018.
- Wang et al. (2022): Yinhuai Wang, Jiwen Yu, and Jian Zhang. Zero-shot image restoration using denoising diffusion null-space model. arXiv preprint arXiv:2212.00490, 2022.
- Wang et al. (2004): Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
- Yuan et al. (2007): Lu Yuan, Jian Sun, Long Quan, and Heung-Yeung Shum. Image deblurring with blurred/noisy image pairs. In Proceedings of the 34th ACM SIGGRAPH Conference on Computer Graphics; San Diego, CA; United States, 2007.
- Zhang et al. (2016): Richard Zhang, Phillip Isola, and Alexei A. Efros. Colorful image colorization. In European Conference on Computer Vision, pages 649–666. Springer, 2016.
- Zhang et al. (2018): Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586–595, 2018.

Appendix A Verification of Assumption 1
Here we provide theoretical support for the uninformative prior assumption (Assumption 1), or, equivalently, for the following Gaussian approximation of the posterior $p(\mathbf{x}_0|\mathbf{x}_t)$:

$$p(\mathbf{x}_0|\mathbf{x}_t) \approx \mathcal{N}\!\left(\frac{\mathbf{x}_t}{m_t},\ \frac{\sigma_t^2}{m_t^2}\mathbf{I}\right). \tag{28}$$

Throughout the following derivations, we drop any additive constants in the log (which correspond to normalizing factors), and drop all terms of order $\mathcal{O}(t)$.

Let us start with the original Bayes' formula in log form:

$$\log p(\mathbf{x}_0|\mathbf{x}_t) = \log p(\mathbf{x}_t|\mathbf{x}_0) + \log p_0(\mathbf{x}_0) - \log p_t(\mathbf{x}_t), \tag{29}$$

where $p_0(\mathbf{x}_0)$ and $p_t(\mathbf{x}_t)$ denote the marginal distributions of $\mathbf{x}_0$ and $\mathbf{x}_t$, respectively.

Since $p_{t-\Delta t}(\cdot) = p_t(\cdot) - \Delta t\,\partial_t p_t(\cdot) + \mathcal{O}(\Delta t^2)$ for $|\Delta t| \ll 1$, setting $\Delta t = t$ gives $\log p_0(\mathbf{x}_0) = \log p_t(\mathbf{x}_0) + \mathcal{O}(t)$, so that

$$\log p(\mathbf{x}_0|\mathbf{x}_t) = \log p(\mathbf{x}_t|\mathbf{x}_0) + \log p_t(\mathbf{x}_0) + \mathcal{O}(t) - \log p_t(\mathbf{x}_t). \tag{30}$$

For (30), we perform a first-order Taylor expansion of $\log p_t(\mathbf{x}_0)$ around $\mathbf{x}_t$, which yields

$$
\begin{aligned}
\log p(\mathbf{x}_0|\mathbf{x}_t) &= \log p(\mathbf{x}_t|\mathbf{x}_0) + \log p_t(\mathbf{x}_t) + \left\langle \nabla_{\mathbf{x}_t}\log p_t(\mathbf{x}_t),\ \mathbf{x}_0-\mathbf{x}_t\right\rangle + \mathcal{O}(t) - \log p_t(\mathbf{x}_t)\\
&= \log p(\mathbf{x}_t|\mathbf{x}_0) + \left\langle \nabla_{\mathbf{x}_t}\log p_t(\mathbf{x}_t),\ \mathbf{x}_0-\mathbf{x}_t\right\rangle + \mathcal{O}(t).
\end{aligned} \tag{31}
$$

Substituting $p(\mathbf{x}_t|\mathbf{x}_0) = \mathcal{N}(m_t\mathbf{x}_0,\ \sigma_t^2\mathbf{I})$ and completing the square, we obtain

$$
\begin{aligned}
\log p(\mathbf{x}_0|\mathbf{x}_t) &= -\frac{\|\mathbf{x}_t - m_t\mathbf{x}_0\|^2}{2\sigma_t^2} + \left\langle \nabla_{\mathbf{x}_t}\log p_t(\mathbf{x}_t),\ \mathbf{x}_0-\mathbf{x}_t\right\rangle + \mathcal{O}(t)\\
&= -\frac{m_t^2}{2\sigma_t^2}\|\mathbf{x}_0 - \boldsymbol{\mu}\|^2 + C,
\end{aligned} \tag{32}
$$

where $C$ is a constant and the mean $\boldsymbol{\mu}$ is

$$\boldsymbol{\mu} = \frac{\mathbf{x}_t}{m_t} + \frac{\sigma_t^2}{m_t^2}\nabla_{\mathbf{x}_t}\log p_t(\mathbf{x}_t). \tag{33}$$

Therefore, the posterior distribution $p(\mathbf{x}_0|\mathbf{x}_t)$ can be approximated by the Gaussian

$$p(\mathbf{x}_0|\mathbf{x}_t) \approx \mathcal{N}\!\left(\frac{\mathbf{x}_t}{m_t} + \frac{\sigma_t^2}{m_t^2}\nabla_{\mathbf{x}_t}\log p_t(\mathbf{x}_t),\ \frac{\sigma_t^2}{m_t^2}\mathbf{I}\right). \tag{34}$$

Comparing (28) and (34), we see that in our result (28) we further ignore the term $\frac{\sigma_t^2}{m_t^2}\nabla_{\mathbf{x}_t}\log p_t(\mathbf{x}_t)$ in the mean. This is valid for sufficiently small $t$, since the variance $\sigma_t^2$ is then sufficiently small by the design of the forward diffusion process. For example, for the DDPM and flow-based models considered in this manuscript, $\sigma_t^2 = 1-\bar{\alpha}_t$ and $\sigma_t^2 = t^2$, respectively.
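The completing-the-square step from (32) to (33) is easy to verify in the scalar case. The following quick numerical check (with g standing in for the scalar score ∇ log p_t(x_t) and arbitrary illustrative constants) confirms that the two expressions differ only by a constant independent of x_0:

```python
# Scalar check of the completing-the-square step: the left-hand side
#   -(x_t - m*x0)^2 / (2*s2) + g*(x0 - x_t)
# should equal -(m^2 / (2*s2)) * (x0 - mu)^2 plus a constant C that does
# not depend on x0, with mu = x_t/m + (s2/m^2)*g as in (33).
m, s2, g, x_t = 0.7, 0.25, 1.3, 2.0   # arbitrary illustrative values
mu = x_t / m + (s2 / m**2) * g

def lhs(x0):
    return -(x_t - m * x0) ** 2 / (2 * s2) + g * (x0 - x_t)

def quad(x0):
    return -(m**2 / (2 * s2)) * (x0 - mu) ** 2

# The gap lhs - quad must be the same constant for every x0
consts = [lhs(x0) - quad(x0) for x0 in (-3.0, -0.5, 0.0, 1.2, 4.7)]
assert max(consts) - min(consts) < 1e-9
```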
Reflecting on this derivation, the main idea is that for sufficiently small $t$, the Bayes'-rule expansion of $p(\mathbf{x}_0|\mathbf{x}_t)$ (recall that this is what we need in order to compute the likelihood score) is dominated by the forward-process term $p(\mathbf{x}_t|\mathbf{x}_0)$, regardless of the prior $p_0(\mathbf{x}_0)$. As a result, the uninformative prior assumption is reasonable for sufficiently small $t$. In fact, this insight is exactly why, in diffusion models, the reverse and forward processes share the same functional form over sufficiently small time intervals. It is worth pointing out that the validity of the above result does not depend on the underlying distribution $p_0(\mathbf{x}_0)$, whether it is a simple Gaussian or a distribution as complex as that of face images.
Figure 4: Comparison of the exact mean and variance of $p(x_0|x_t)$ with the pseudo mean and variance under the uninformative prior assumption, i.e., $p(x_0|x_t) \propto p(x_t|x_0)$.
A toy example: We further consider a toy example in which the exact form of $p(\mathbf{x}_0|\mathbf{x}_t)$ in (12) can be computed exactly. Assume that $\mathbf{x}$ reduces to a scalar random variable $x$ and that the prior $p(x_0)$ is Gaussian, i.e., $p(x_0) = \mathcal{N}(x_0; 0, \sigma_0^2)$, where $\sigma_0^2$ is the prior variance. The likelihood $p(x_t|x_0)$ in (3) is then simply $p(x_t|x_0) = \mathcal{N}(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ 1-\bar{\alpha}_t)$. From (12), after some algebra, the posterior distribution $p(x_0|x_t)$ is

$$p(x_0|x_t) = \mathcal{N}(x_0;\ m_{\mathrm{exact}},\ v_{\mathrm{exact}}), \tag{35}$$

where

$$m_{\mathrm{exact}} = \frac{\sqrt{\bar{\alpha}_t}\,\sigma_0^2}{(1-\bar{\alpha}_t)+\bar{\alpha}_t\sigma_0^2}\,x_t, \qquad v_{\mathrm{exact}} = \frac{(1-\bar{\alpha}_t)\,\sigma_0^2}{(1-\bar{\alpha}_t)+\bar{\alpha}_t\sigma_0^2}. \tag{36}$$

Under Assumption 1, i.e., $p(x_0|x_t) \propto p(x_t|x_0)$, we obtain the approximation

$$p(x_0|x_t) \approx \tilde{p}(x_0|x_t) = \mathcal{N}(x_0;\ m_{\mathrm{pseudo}},\ v_{\mathrm{pseudo}}), \tag{37}$$

where

$$m_{\mathrm{pseudo}} = \frac{x_t}{\sqrt{\bar{\alpha}_t}}, \qquad v_{\mathrm{pseudo}} = \frac{1-\bar{\alpha}_t}{\bar{\alpha}_t}. \tag{38}$$

Comparing the exact result (36) with the approximation (38), it is easy to see that for any fixed $\sigma_0^2 > 0$, as $\bar{\alpha}_t \to 1$ we have $m_{\mathrm{pseudo}} \to m_{\mathrm{exact}}$ and $v_{\mathrm{pseudo}} \to v_{\mathrm{exact}}$, which is exactly the case for DDPM as $t \to 1$. To see this, we anneal $\bar{\alpha}_t$ geometrically as $\bar{\alpha}_t = \bar{\alpha}_{\max}\left(\bar{\alpha}_{\min}/\bar{\alpha}_{\max}\right)^{\frac{t-1}{T-1}}$ and compare $m_{\mathrm{pseudo}}, v_{\mathrm{pseudo}}$ with $m_{\mathrm{exact}}, v_{\mathrm{exact}}$ as $t$ increases from $1$ to $T$. Setting $\bar{\alpha}_{\min} = 0.01$, $\bar{\alpha}_{\max} = 0.99$, $\sigma_0 = 25$, $x_t = 5$, and $T = 500$, we obtain the results in Fig. 4. As shown there, the approximate values $m_{\mathrm{pseudo}}, v_{\mathrm{pseudo}}$, and especially the variance $v_{\mathrm{pseudo}}$, approach the exact values $m_{\mathrm{exact}}, v_{\mathrm{exact}}$ very quickly, verifying the effectiveness of Assumption 1 in this toy example.
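The comparison in Fig. 4 is easy to reproduce. The following short numpy sketch evaluates the exact posterior (36) and the pseudo posterior (38) under the stated geometric schedule, and checks that they nearly coincide at t = 1 (where ᾱ_t = 0.99):

```python
import numpy as np

# Geometric schedule for abar_t, as in the toy example
T, abar_min, abar_max = 500, 0.01, 0.99
sigma0, x_t = 25.0, 5.0
t = np.arange(1, T + 1)
abar = abar_max * (abar_min / abar_max) ** ((t - 1) / (T - 1))

# Exact Gaussian posterior p(x0 | x_t), eq. (36)
denom = (1 - abar) + abar * sigma0**2
m_exact = np.sqrt(abar) * sigma0**2 * x_t / denom
v_exact = (1 - abar) * sigma0**2 / denom

# Pseudo posterior under Assumption 1, eq. (38)
m_pseudo = x_t / np.sqrt(abar)
v_pseudo = (1 - abar) / abar

# Close agreement at small t (abar near 1); the gap widens as abar decreases
assert abs(m_exact[0] - m_pseudo[0]) < 0.05
assert abs(v_exact[0] - v_pseudo[0]) < 1e-3
assert v_pseudo[-1] > v_exact[-1]
```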
Appendix B Effect of the Scaling Parameter λ
As shown in both Algorithm 1 and Algorithm 2, a hyper-parameter λ is introduced as a scaling factor on the likelihood score. Empirically, DMPS is found to be robust to different choices of λ around 1, although λ = 1 yields slightly better results most of the time. As one specific example, we show the results of DMPS on super-resolution for different values of λ in Figure 5 (DDPM version) and Figure 6 (flow-based version). DMPS is robust to the choice of λ, i.e., it works well over a wide range of values.
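As a sketch of where the scaling parameter enters (a simplified stand-in for the update in Algorithms 1 and 2, with a hypothetical helper function, not the released implementation), the score used for sampling is the unconditional score plus the scaled likelihood score:

```python
import math

def dmps_guided_score(prior_score, lik_score, lam=1.0):
    """Posterior-score surrogate: unconditional (prior) score plus the
    lam-scaled likelihood score, applied per coordinate (hypothetical helper)."""
    return [p + lam * l for p, l in zip(prior_score, lik_score)]

# lam = 1 recovers the plain sum; nearby values only mildly rescale the guidance
guided = dmps_guided_score([0.3, -0.2], [0.1, 0.4], lam=1.0)
assert all(math.isclose(a, b) for a, b in zip(guided, [0.4, 0.2]))
```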
Figure 5: Results of DMPS (DDPM version) with different λ for the task of noisy super-resolution (×4) with σ = 0.05.
Figure 6: Results of DMPS (flow-based version) with different λ for the task of noisy super-resolution (×4) with σ = 0.05.
Appendix C Results on More Datasets
We provide additional experimental results on the AFHQ-Cat and LSUN-Bedroom datasets, for both diffusion and flow-based models, as follows:
| Method | SR PSNR↑ | SR SSIM↑ | SR LPIPS↓ | Deblur PSNR↑ | Deblur SSIM↑ | Deblur LPIPS↓ | Color. PSNR↑ | Color. SSIM↑ | Color. LPIPS↓ | Denoise PSNR↑ | Denoise SSIM↑ | Denoise LPIPS↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DMPS (DDPM, ours) | 26.79 | 0.7653 | 0.2632 | 27.22 | 0.7571 | 0.2909 | 25.07 | 0.9190 | 0.3124 | 28.59 | 0.7994 | 0.2882 |
| DPS (DDPM) | 23.08 | 0.6127 | 0.3860 | 24.64 | 0.6625 | 0.3033 | 15.92 | 0.5976 | 0.6381 | 28.86 | 0.7828 | 0.2941 |
| PGDM | 25.44 | 0.7185 | 0.2837 | 26.69 | 0.7316 | 0.2896 | 16.74 | 0.6348 | 0.5335 | 27.06 | 0.7453 | 0.3236 |

Table 3: Results on the AFHQ-Cat validation dataset using the same pre-trained DDPM model.
| Method | SR PSNR↑ | SR SSIM↑ | SR LPIPS↓ | Deblur PSNR↑ | Deblur SSIM↑ | Deblur LPIPS↓ | Color. PSNR↑ | Color. SSIM↑ | Color. LPIPS↓ | Denoise PSNR↑ | Denoise SSIM↑ | Denoise LPIPS↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DMPS (DDPM, ours) | 25.63 | 0.7362 | 0.2281 | 28.21 | 0.8162 | 0.2113 | 23.19 | 0.9344 | 0.2117 | 29.81 | 0.8599 | 0.1884 |
| DPS (DDPM) | 22.83 | 0.6190 | 0.3275 | 24.97 | 0.6988 | 0.2593 | 11.38 | 0.5375 | 0.6606 | 30.75 | 0.8674 | 0.1841 |
| PGDM | 24.60 | 0.6854 | 0.2590 | 26.90 | 0.7721 | 0.2482 | 17.69 | 0.7335 | 0.3350 | 27.90 | 0.8153 | 0.2304 |

Table 4: Results on the LSUN-Bedroom validation dataset using the same pre-trained DDPM model.
| Method | SR PSNR↑ | SR SSIM↑ | SR LPIPS↓ | Deblur PSNR↑ | Deblur SSIM↑ | Deblur LPIPS↓ | Color. PSNR↑ | Color. SSIM↑ | Color. LPIPS↓ | Denoise PSNR↑ | Denoise SSIM↑ | Denoise LPIPS↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DMPS (Flow-based, ours) | 29.06 | 0.7905 | 0.2627 | 26.74 | 0.6942 | 0.3192 | 24.65 | 0.9140 | 0.2531 | 26.53 | 0.7870 | 0.3353 |
| DPS (Flow-based) | 27.61 | 0.7089 | 0.3190 | 23.26 | 0.5534 | 0.4122 | 21.64 | 0.8259 | 0.3833 | 26.10 | 0.6418 | 0.4049 |
| OT-ODE | 27.61 | 0.7081 | 0.3205 | 26.32 | 0.6592 | 0.3333 | 25.21 | 0.8692 | 0.3180 | 23.12 | 0.3647 | 0.5289 |

Table 5: Results on the AFHQ-Cat validation dataset using the same pre-trained flow-based model.
| Method | SR PSNR↑ | SR SSIM↑ | SR LPIPS↓ | Deblur PSNR↑ | Deblur SSIM↑ | Deblur LPIPS↓ | Color. PSNR↑ | Color. SSIM↑ | Color. LPIPS↓ | Denoise PSNR↑ | Denoise SSIM↑ | Denoise LPIPS↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DMPS (Flow-based, ours) | 24.36 | 0.6795 | 0.3837 | 23.19 | 0.5869 | 0.4384 | 23.37 | 0.8756 | 0.2838 | 22.68 | 0.6477 | 0.4458 |
| DPS (Flow-based) | 24.39 | 0.6430 | 0.3781 | 20.13 | 0.4318 | 0.4931 | 11.03 | 0.5283 | 0.7843 | 23.18 | 0.5457 | 0.4598 |
| OT-ODE | 23.88 | 0.6193 | 0.4001 | 22.69 | 0.5590 | 0.4264 | 23.62 | 0.7592 | 0.3923 | 18.17 | 0.2039 | 0.6405 |

Table 6: Results on the LSUN-Bedroom validation dataset using the same pre-trained flow-based model.