# Accelerating Diffusion Sampling via Exploiting Local Transition Coherence
Shangwen Zhu\*, Han Zhang\*, Zhantao Yang\*, Qianyu Peng\*, Zhao Pu Huangji Wang, Fan Cheng†
Shanghai Jiao Tong University, Shanghai, China
{zhushangwen6, hzhang9617, ztyang196}@gmail.com, karry2020@alumni.sjtu.edu.cn
{wanghuangji, pz-23dy}@sjtu.edu.cn, chengfan85@gmail.com
# Abstract
Text-based diffusion models have made significant breakthroughs in generating high-quality images and videos from textual descriptions. However, the lengthy sampling time of the denoising process remains a significant bottleneck in practical applications. Previous methods either ignore the statistical relationships between adjacent steps or rely on attention or feature similarity between them, which often only works with specific network structures. To address this issue, we discover a new statistical relationship in the transition operator between adjacent steps, focusing on the relationship between the outputs of the network. This relationship does not impose any requirements on the network structure. Based on this observation, we propose a novel training-free acceleration method called LTC-Accel, which uses the identified relationship to estimate the current transition operator from those of adjacent steps. Because it makes no specific assumptions about the network structure, LTC-Accel is applicable to almost all diffusion-based methods and is orthogonal to almost all existing acceleration techniques, making it easy to combine with them. Experimental results demonstrate that LTC-Accel significantly speeds up sampling in text-to-image and text-to-video synthesis while maintaining competitive sample quality. Specifically, LTC-Accel achieves a speedup of $1.67 \times$ on Stable Diffusion v2 and a speedup of $1.55 \times$ on video generation models. When combined with distillation models, LTC-Accel achieves a remarkable $10 \times$ speedup in video generation, enabling real-time generation at more than 16 FPS. Our code (including a Colab version) is available on our Project Page.
# 1. Introduction
Figure 1. Comparison between LTC-Accel and DPM-Solver++ when implemented on Stable Diffusion v3.5 under the same number of steps and speedup framework. Results show that LTC-Accel significantly outperforms DPM-Solver++.

Recent advancements in text-based generation, particularly with diffusion models [6, 9, 27, 29, 30], have significantly improved the generation of high-fidelity images [1, 7, 19, 43, 49, 57], audio [14, 17, 18, 24, 34, 45, 46], and video [4, 11, 16, 20, 28, 31, 52, 53] from textual descriptions, achieving remarkable advancements in visual fidelity and semantic alignment. By iteratively refining a noisy input until it converges to a sample that aligns with the given text prompt, these models capture intricate details and complex compositions previously unattainable with other approaches. Despite their impressive capabilities, a major drawback of diffusion models, particularly in video generation, is their high computational complexity during the denoising process, leading to prolonged inference times and substantial computational costs. For instance, generating a 5-second video at 8 frames per second (FPS) with a resolution of 720P using Wan2.1-14B [44] on a single H20 GPU takes approximately 6935 seconds, highlighting the significant resource demands of high-quality video synthesis. This limitation poses a considerable challenge for real-time applications and resource-constrained environments [1, 41].

(a) Combining LTC-Accel with DeepCache (Cache).

(b) Combining with a Distillation Model (Distill).
Figure 2. Qualitative results of LTC-Accel integrated with existing training-free and training-based methods. Fig. 2a: Integration of LTC-Accel with caching-based methods using DDIM on Stable Diffusion v2. Fig. 2b: Integration of LTC-Accel with Animated-Diff-Lightning (the distilled version of Animated-Diff). Results show that LTC-Accel combines well with previous methods and achieves additional speedup without compromising the quality of the generated images.
To accelerate diffusion models, various approaches have been proposed, including training-based and training-free strategies. Training-based methods enhance sampling efficiency by modifying the training process [12, 25, 38, 40, 42, 54, 59] or altering model architectures [15, 33, 39, 55], but they require additional computation and extended training. In contrast, training-free methods improve sampling efficiency without modifying the trained model, either by optimizing the denoising process [5, 10, 32, 37, 51] or by introducing more efficient solvers [22, 23, 58]. Additionally, caching-based methods such as DeepCache [26, 47] exploit temporal redundancy across denoising steps to store and reuse intermediate features, thus reducing redundant computations. However, these methods require a redesign of the caching strategy whenever the network architecture changes, limiting their flexibility across different models and configurations. Furthermore, they usually incur extra memory overhead to store intermediate representations, imposing constraints in resource-constrained deployment scenarios.
Unlike previous methods that rely on attention mechanisms or feature similarity within the network, we identify the phenomenon of Local Transition Coherence, which refers to the strong correlation between the transition operators $\Delta \mathbf{x}_{t+1,t}$ of neighboring steps. Based on this insight, we propose LTC-Accel, a novel training-free acceleration method that approximates the current step's transition operator using those of adjacent steps. As a result, it does not depend on any specific network architecture, making it broadly applicable to various diffusion models and compatible with both training-based and training-free acceleration methods.
We conducted extensive experiments demonstrating the effectiveness of our method and its compatibility with other approaches. LTC-Accel achieves a $1.67 \times$ speedup on Stable Diffusion v2 [35], and when combined with DeepCache [26], accelerates the process to $2.34 \times$. It also integrates with Align Your Steps [37], achieving the equivalent of 10 steps of Align Your Steps in just 8 steps on Stable Diffusion v1.5 [35], with minimal impact on generation quality. Additionally, we achieve a $1.67 \times$ speedup on the Animated-Diff model [4], and by combining it with the distilled version, Animated-Diff-Lightning [20], we achieve a $10 \times$ speedup in video generation, enabling three-step generation. These results collectively illustrate that our approach not only significantly reduces the computational load but also synergizes effectively with other optimization methods, facilitating highly efficient inference even under stringent resource constraints.
The core contributions of our work are:
- We identify the phenomenon of Local Transition Coherence. Unlike previous approaches that rely on attention mechanisms or feature similarity within the network, this phenomenon reveals the inherent consistency of update trajectories: a broader consistency that is independent of any specific network architecture and pervasive throughout the diffusion sampling process.
- We propose LTC-Accel, a training-free, highly generalizable method that is applicable to various diffusion models and orthogonal to both training-based and training-free acceleration methods, providing significant acceleration without sacrificing performance.
- We conducted extensive experiments to validate the effectiveness of our method and its compatibility with other approaches. LTC-Accel significantly accelerates the generation process of models including Stable Diffusion, CogVideoX, and Animated-Diff, and enhances performance when combined with existing acceleration methods.

(a) Angle variation example. (b) Acceleration error. (c) Convergence of $w_{g}$.

Figure 3. Variation of angle, error, and weight across sampling steps on Stable Diffusion v2 using DDIM with 20 steps and 20 unique prompt-latent pairs. (a) Angle variation per step. (b) Error between LTC-Accel and the original process, with acceleration $(r = 2)$ applied over steps [12, 38], quantified by the 2-norm difference in latents. (c) $w_{g}$ variation, showing initial oscillations followed by rapid convergence.
# 2. Related Work
# 2.1. Training-Based Acceleration Methods
Traditional acceleration methods for diffusion models typically modify the model architecture or the sampling process during training to achieve faster inference. Typical approaches include mixed-precision training and lightweight architectures [39, 55]. Recently, distillation [12, 25, 38, 40, 42, 54, 59] has gained popularity, focusing on reducing inference steps without significant quality loss. However, these methods require additional training, which can be both time-consuming and resource-intensive.
# 2.2. Training-Free Acceleration Methods
In scenarios where training resources are limited, training-free acceleration methods [5, 10, 32, 51] can outperform training-based ones: they require no additional training, yet still achieve acceleration with little compromise in performance. Many successful training-free methods have thus been proposed and achieved promising results. Building on the DDPM formulation, DDIM [41] is a typical attempt that reduces the number of inference steps by introducing a non-Markovian process. The DPM-Solver family [22, 23, 58] also achieves a significant reduction in the number of inference steps by applying specific high-order solvers to the diffusion ODEs [8, 56, 60] equivalent to the denoising process. Besides, more recent approaches focus on preserving and reusing features from earlier steps, significantly reducing the computational load of subsequent steps. For example, DeepCache [26, 47] performs exceptionally well by reusing cacheable high-level features across consecutive steps and only updating low-level features, achieving nearly lossless acceleration. Moreover, optimizing the sampling schedule has also proven effective: Align Your Steps [37] broadens its application scenarios by leveraging methods from stochastic calculus and applying optimal schedules specific to various solvers, pre-trained models, and datasets.
# 3. LTC-Accel for Faster Diffusion Sampling
In this section, we first introduce the newly observed phenomenon called Local Transition Coherence in Sec. 3.1, which reveals the statistical correlation between the outputs of adjacent transition operations in the diffusion process. Next, in Sec. 3.2, we introduce our method, LTC-Accel, which exploits the transition operators of neighboring steps to approximate and replace the transition operator of the current step. Finally, in Sec. 3.3, we analyze the errors introduced by LTC-Accel and show that they are negligible.
# 3.1. Local Transition Coherence
In this section, we introduce the newly observed phenomenon called Local Transition Coherence: During certain phases of the diffusion process, the transition operators of consecutive steps exhibit significant similarity. To quantify the difference between steps $t$ and $t + 1$ , we define the transition operator as $\Delta \mathbf{x}_{t+1,t} = \mathbf{x}_t - \mathbf{x}_{t+1}$ . Additionally, we define the angle between the transition operators at successive steps as follows:
Definition 3.1. The angle $\theta$ between $\Delta \mathbf{x}_{t + 1,t}$ and $\Delta \mathbf{x}_{t + 2,t + 1}$ is defined as
$$
\theta = \arccos \left(\frac {\Delta \mathbf {x} _ {t + 1 , t} \cdot \Delta \mathbf {x} _ {t + 2 , t + 1}}{\left\| \Delta \mathbf {x} _ {t + 1 , t} \right\| _ {2} \left\| \Delta \mathbf {x} _ {t + 2 , t + 1} \right\| _ {2}}\right). \tag {1}
$$
Note that when the angle $\theta$ approaches 0, the update trajectories of the two transition operators are nearly identical and can replace each other. As shown in Fig. 3a, the angles are relatively small from steps 12 to 38, suggesting that the update trajectories are highly similar during this period. This allows us to reduce unnecessary computation by approximating the current transition operator with those of adjacent steps, thereby speeding up the sampling process significantly and effectively.
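The angle in Eq. (1) is straightforward to measure empirically from three consecutive latents. A minimal NumPy sketch (the latent arguments here are placeholders, not tied to any particular diffusion library):

```python
import numpy as np

def transition_angle(x_t, x_t1, x_t2):
    """Angle (Eq. 1) between consecutive transition operators.

    x_t, x_t1, x_t2 are the latents at steps t, t+1, t+2
    (arrays of any shape, flattened for the dot product)."""
    d1 = (x_t - x_t1).ravel()    # Delta x_{t+1,t}
    d2 = (x_t1 - x_t2).ravel()   # Delta x_{t+2,t+1}
    cos = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    # clip guards against tiny floating-point excursions outside [-1, 1]
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

Collinear transitions give $\theta = 0$; orthogonal ones give $\theta = \pi/2$.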
In practice, we define a threshold $\tau$ to identify the interval where $\theta$ remains relatively small, referred to as the acceleration interval and formally defined as follows:
Definition 3.2. The acceleration interval is defined as the range $[a, b]$ , where $a \geq 1$ , $b \leq T$ , and the angle $\theta_t$ at every step $t \in [a, b]$ satisfies the following condition:
$$
\theta_ {t} < \tau , \quad \forall t \in [ a, b ]. \tag {2}
$$
During the acceleration interval, we approximate the update trajectories of the current step using those of previous steps, allowing us to skip the network evaluations for these steps. Based on Local Transition Coherence, we propose a new training-free acceleration method called LTC-Accel, which improves the efficiency of the sampling process while maintaining the quality of the generated samples. It is important to note that Local Transition Coherence places no restrictions on the network's structure, meaning that LTC-Accel can be applied to nearly any diffusion model.
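Given per-step angles recorded from a calibration run, the acceleration interval of Definition 3.2 can be located as the longest contiguous run of steps below the threshold. A minimal sketch (the list `thetas`, indexed by step, is an assumed input):

```python
def acceleration_interval(thetas, tau):
    """Longest contiguous step range [a, b] with theta_t < tau (Def. 3.2).

    `thetas[t]` holds the measured angle at step t; returns (a, b)
    or None when no step satisfies the threshold."""
    best = None
    start = None
    for t, theta in enumerate(thetas):
        if theta < tau:
            if start is None:
                start = t           # open a new candidate run
            if best is None or t - start > best[1] - best[0]:
                best = (start, t)   # extend the best run found so far
        else:
            start = None            # threshold violated: close the run
    return best
```

With $\tau$ on the order of 0.1 to 0.2 (see Sec. 3.3), this typically selects the middle portion of the trajectory.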
# 3.2. LTC-Accel: Transition Approximation
LTC-Accel reduces unnecessary computations by approximating the update trajectories of the current step using those from adjacent steps. In this section, we first introduce the formula for the approximated step. Next, we explain the conditions for this approximation and present the overall algorithm, followed by a description of the derivation.
Definition 3.3. With $\phi(t)$ denoting the denoising progress at step $t$ , the approximated step $x_{t}^{*}$ of step $t$ is defined as
$$
\mathbf {x} _ {t} ^ {*} = \mathbf {x} _ {t + 1} + w _ {g} \gamma \left(\Delta \mathbf {x} _ {t + 2, t + 1}\right), \tag {3}
$$
where $w_{g} = \frac{(\Delta\mathbf{x}_{t + 1,t})\cdot(\Delta\mathbf{x}_{t + 2,t + 1})}{\gamma\left\|(\Delta\mathbf{x}_{t + 2,t + 1})\right\|^{2}},\gamma = \frac{\phi(t) - \phi(t + 1)}{\phi(t + 1) - \phi(t + 2)}.$
Here, we would like to emphasize that although the calculation of $w_{g}$ relies on the target variable $\mathbf{x}_t$ , $w_{g}$ itself is a convergent quantity that depends solely on the step $t$ . To clearly identify the positions of the approximated steps, we define the acceleration condition:
Definition 3.4. The acceleration condition is defined as follows:
$$
t \bmod r = r - 1, \tag {4}
$$
where $t$ is the step number, $r$ is a constant, and $\bmod$ denotes the modulo operation.
Based on these definitions, we propose LTC-Accel. During inference, following Eq. (2), we identify the acceleration intervals. If the current step lies within an acceleration interval and satisfies the acceleration condition, we replace the original transition with the approximated step of Eq. (3). The detailed algorithm is presented in Algorithm 1.
# 3.2.1. Formalizing $\gamma$ and $w_{g}$
In this section, we explain how to select $\gamma$ and $w_{g}$ , which are crucial for implementing LTC-AccEL. The goal is to find appropriate values for $w_{g}$ and $\gamma$ that minimize the error of the approximated step, described as the difference between the approximated value $\mathbf{x}_t^*$ and the target value $\mathbf{x}_t$ as follows:
Definition 3.5. Parameters $w_{g}$ and $\gamma$ are defined as
$$
w _ {g}, \gamma = \underset {w _ {g}, \gamma} {\operatorname {argmin}} \left(\left\| \Delta \mathbf {x} _ {t + 1, t} - w _ {g} \gamma \Delta \mathbf {x} _ {t + 2, t + 1} \right\| ^ {2}\right). \tag {5}
$$

Figure 4. Variation of PSNR values between original images and those generated by LTC-Accel as a function of bias, following the same experimental setup as Fig. 3.
Empirically determined value of $\gamma$ : $\gamma$ controls the relative weighting of the step intervals, effectively adjusting the influence of denoising progress on the update process. We denote the denoising progress at step $t$ as $\phi(t)$ and define it as $\phi(t) = \sqrt{\mathrm{SNR}_t}$ , where SNR represents the signal-to-noise ratio. This choice is motivated by the observation that as denoising progresses, the signal becomes increasingly dominant over noise, leading to a higher SNR. Taking
the square root of SNR provides a measure that scales more linearly with the improvement in signal quality, offering a practical representation of denoising progress. Consequently, we quantify the progress over the interval $[t,t + 1]$ as $\phi (t) - \phi (t + 1)$ , which serves as the foundation for defining $\gamma$ as follows:
$$
\gamma = \frac {\phi (t) - \phi (t + 1)}{\phi (t + 1) - \phi (t + 2)}. \tag {6}
$$
Optimal value of $w_{g}$ : $w_{g}$ scales the magnitude of the transition operator. It is determined by minimizing the discrepancy between the actual transition and the approximated transition step given in Eq. (3). The resulting value of $w_{g}$ is as follows (see supplementary for details):
$$
w _ {g} = \frac {\Delta \mathbf {x} _ {t + 1 , t} \cdot \Delta \mathbf {x} _ {t + 2 , t + 1}}{\gamma \left\| \Delta \mathbf {x} _ {t + 2 , t + 1} \right\| ^ {2}}. \tag {7}
$$
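Eq. (6) and Eq. (7) can be computed directly from three consecutive latents and the progress values $\phi(t) = \sqrt{\mathrm{SNR}_t}$. A minimal NumPy sketch (the latents and $\phi$ values here are placeholders):

```python
import numpy as np

def gamma_and_wg(x_t, x_t1, x_t2, phi_t, phi_t1, phi_t2):
    """gamma (Eq. 6) and the least-squares optimal w_g (Eq. 7).

    phi_* are the denoising-progress values phi(t) = sqrt(SNR_t);
    note that the true x_t is needed here, which is why w_g is
    pre-computed on a calibration run (Algorithm 2)."""
    gamma = (phi_t - phi_t1) / (phi_t1 - phi_t2)
    d1 = (x_t - x_t1).ravel()   # Delta x_{t+1,t}
    d2 = (x_t1 - x_t2).ravel()  # Delta x_{t+2,t+1}
    w_g = np.dot(d1, d2) / (gamma * np.dot(d2, d2))
    return gamma, w_g
```

Note that $w_{g}\gamma\,\Delta\mathbf{x}_{t+2,t+1}$ is exactly the orthogonal projection of $\Delta\mathbf{x}_{t+1,t}$ onto $\Delta\mathbf{x}_{t+2,t+1}$, which is what makes the single-step error in Sec. 3.3 depend only on the angle $\theta$.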
# 3.2.2. Convergence analysis of $w_{g}$
As shown in Eq. (5), the calculation of $w_{g}$ depends on $\mathbf{x}_{t}$ . However, during sampling, the value of $\mathbf{x}_{t}$ is precisely the target we aim to approximate, which poses a challenge to the approximation process. Fortunately, we observe that $w_{g}$ consistently converges across varying $\mathbf{x}_{t}$ , allowing us to determine the corresponding $w_{g}$ solely from $t$ . In this section, we first introduce the algorithm for evaluating $w_{g}$ , and then employ it to conduct a rigorous convergence analysis of $w_{g}$ .
Algorithm for Estimating $w_{g}$ : Computing $w_{g}$ is challenging because approximating $\mathbf{x}_t$ introduces cumulative errors that propagate through subsequent steps, affecting the accuracy of the sampling process, as illustrated in Fig. 3b. To mitigate this issue, we propose a two-step algorithm: 1) compute the current $w_{g}$ and use it to derive the approximated step $\mathbf{x}_t^*$ , which serves as the input for the next iteration; 2) perform a local search to optimize $w_{g}$ across the entire acceleration interval, minimizing cumulative errors (see Algorithm 2 for details).
Algorithm for Refining the Estimation of $w_{g}$ : Although the $w_{g}$ obtained by Algorithm 2 performs well, we introduce an optional refinement step to further improve the fidelity of accelerated images. Specifically, we use PSNR to evaluate and enhance the quality of accelerated images. This metric quantifies fidelity by measuring the similarity between an accelerated image and its non-accelerated counterpart. Typically, a PSNR above 30 dB signifies high-fidelity generation. The refinement algorithm consists of: 1) introducing a bias to adjust all $w_{g}$ values, and 2) conducting an end-to-end search within a predefined bias interval to determine the optimal adjustment (see Algorithm 3 for details).
Analysis of results: Similar to the convergence behavior observed in Fig. 3c, the $w_{g}$ computed by Algorithm 2 consistently converges, ensuring the generality of our method
(see supplementary for more details). Additionally, Fig. 4 demonstrates that introducing a bias further enhances image similarity, increasing the PSNR from 37.5 to 39. The symmetry observed between bias and PSNR allows efficient optimization through binary search.
Algorithm 1 LTC-Accel Acceleration
1: Input: Diffusion model $p_{\theta}$ , acceleration interval $[a, b]$ , acceleration cycle $r$ , mapping $\phi$ , constant sequence $w_{g}$ , initial noise $\mathbf{x}_{N}$ .
2: Output: $\mathbf{x}_{0}$ .
3: for $t = N$ to 0 do
4: if $t \in [a, b]$ and $t \mod r = r - 1$ then
5: $\gamma = \frac{\phi(t) - \phi(t + 1)}{\phi(t + 1) - \phi(t + 2)}$
6: $\mathbf{x}_t = \mathbf{x}_{t + 1} + w_g[t] \cdot \gamma \cdot \Delta \mathbf{x}_{t + 2, t + 1}$
7: else
8: $\mathbf{x}_t = p_{\theta}(\mathbf{x}_{t + 1}, t)$
9: end if
10: end for
11: return $\mathbf{x}_0$
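Algorithm 1 can be sketched as a plain Python loop. The interfaces below are assumptions for illustration: `step_fn(x, t)` stands for one ordinary solver step $p_{\theta}(\mathbf{x}_{t+1}, t) \to \mathbf{x}_t$, `phi(t)` for the denoising progress, and `w_g` for the table pre-computed by Algorithm 2:

```python
def ltc_accel_sample(step_fn, x_init, N, a, b, r, w_g, phi):
    """Training-free sampling loop sketching Algorithm 1."""
    xs = {N: x_init}  # latents indexed by step number
    for t in range(N - 1, -1, -1):
        accel = a <= t <= b and t % r == r - 1
        if accel and (t + 2) in xs:
            # approximated step (Eq. 3): reuse the previous transition
            gamma = (phi(t) - phi(t + 1)) / (phi(t + 1) - phi(t + 2))
            xs[t] = xs[t + 1] + w_g[t] * gamma * (xs[t + 1] - xs[t + 2])
        else:
            # full network evaluation for the remaining steps
            xs[t] = step_fn(xs[t + 1], t)
    return xs[0]
```

The `(t + 2) in xs` guard simply ensures two previous latents exist before the first approximated step; every skipped step avoids one full network evaluation.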
Algorithm 2 Calculate $w_{g}$
1: Input: Diffusion model $p_{\theta}$ , acceleration interval $[a, b]$ , acceleration cycle $r$ , mapping $\phi$ , initial noise $\mathbf{x}_N$ .
2: Output: $w_g$ .
3: for $t = N$ to 0 do
4: if $t \in [a, b]$ and $t \mod r = r - 1$ then
5: $\gamma = \frac{\phi(t) - \phi(t + 1)}{\phi(t + 1) - \phi(t + 2)}$
6: $w_g[t] = \frac{\Delta x_{t + 1,t} \cdot \Delta x_{t + 2,t + 1}}{\gamma\|\Delta x_{t + 2,t + 1}\|^2}$
7: $\mathbf{x}_t^* = \mathbf{x}_{t + 1} + w_g[t] \cdot \gamma \cdot \Delta \mathbf{x}_{t + 2,t + 1}$
8: $\mathbf{x}_t = \mathbf{x}_t^*$
9: else
10: $\mathbf{x}_t = p_\theta (\mathbf{x}_{t + 1}, t)$
11: end if
12: end for
13: return $w_g$
Algorithm 3 (Optional) Improve $w_{g}$
1: Input: Diffusion model $p_{\theta}$ , accelerated sampler $p_{\theta}^{*}$ , bias interval $[c, d]$ , initial noise $\mathbf{x}_N$ .
2: Output: Constant bias*.
3: $\mathbf{x}_0 = \prod_{t=N}^{0} p_{\theta}(\mathbf{x}_{t+1}, t)$
4: bias* = $\underset{\mathrm{bias} \in [c, d]}{\operatorname{argmax}}\ \mathrm{PSNR}\left(\mathbf{x}_0, \prod_{t=N}^{0} p_{\theta}^{*}(\mathbf{x}_{t+1}, t, w_g + \mathrm{bias})\right)$
5: return bias*
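Algorithm 3 reduces to a one-dimensional search over the bias. A simple grid-search sketch is below; the binary search mentioned in Sec. 3.2.2 is a drop-in replacement, and `accel_fn(bias)` is an assumed wrapper that reruns the accelerated sampler with $w_g + \mathrm{bias}$:

```python
import numpy as np

def psnr(ref, img, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two arrays."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def refine_bias(reference, accel_fn, biases):
    """Grid-search version of Algorithm 3: pick the bias whose
    accelerated output best matches the non-accelerated reference."""
    return max(biases, key=lambda b: psnr(reference, accel_fn(b)))
```

In practice `reference` is the non-accelerated sample $\mathbf{x}_0$ and `biases` is a grid over the interval $[c, d]$.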

(a) SD v2 (DDIM, ImageReward↑). (b) SD v3.5 (DPM-Solver++, ImageReward↑). (c) SD v3.5 (EDM, ImageReward↑). (d) SD v2 (DDIM, PickScore↑). (e) SD v3.5 (DPM-Solver++, PickScore↑). (f) SD v3.5 (EDM, PickScore↑).

Figure 5. Quantitative results of text-to-image. We present our results on Stable Diffusion v2 and Stable Diffusion v3.5 by measuring ImageReward and PickScore using 1000 prompt-image pairs. To demonstrate compatibility with most schedulers, we use DDIM for sampling on Stable Diffusion v2, and DPM-Solver++ and EDM for sampling on Stable Diffusion v3.5. The results demonstrate that our method achieves a $1.67 \times$ acceleration on Stable Diffusion v3.5, as well as a $1.67 \times$ acceleration on Stable Diffusion v2.
# 3.3. Error Analysis of the Approximated Step
In this section, we demonstrate that the error introduced by the approximation is generally negligible. We first provide the upper bound of the error of approximating a single step, followed by a comprehensive evaluation of the actual error incurred throughout the sampling process in practice.
Error of approximating a single step: Using the expressions for $w_{g}$ and $\gamma$ , we can estimate the relative error between the approximated step and the original step. When the angle between $\Delta \mathbf{x}_{t + 1,t}$ and $\Delta \mathbf{x}_{t + 2,t + 1}$ is less than a threshold $\tau$ , the relative error $\epsilon_r$ can be expressed as follows (see appendix for details):
$$
\epsilon_ {r} = \frac {\left\| \mathbf {x} _ {t} - \mathbf {x} _ {t} ^ {*} \right\| ^ {2}}{\left\| \Delta \mathbf {x} _ {t + 1 , t} \right\| ^ {2}} < \tau^ {2}. \tag {8}
$$
This implies that the squared error introduced by the approximated step is smaller than the squared norm of the original update term scaled by $\tau^2$ . Since $\tau$ is typically on the order of $[0.1, 0.2]$ , $\epsilon_r$ is negligible.
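The bound can be seen geometrically: the optimal $w_{g}\gamma\,\Delta\mathbf{x}_{t+2,t+1}$ is the orthogonal projection of $\Delta\mathbf{x}_{t+1,t}$ onto $\Delta\mathbf{x}_{t+2,t+1}$, so the residual is the orthogonal component, of relative squared size $\sin^2\theta \leq \theta^2 < \tau^2$. A quick numerical check with synthetic transitions (the vectors below are hypothetical, not taken from a real model):

```python
import numpy as np

# Hypothetical transitions with a small angle theta between them.
theta = 0.15
d2 = np.array([1.0, 0.0])                            # Delta x_{t+2,t+1}
d1 = 2.0 * np.array([np.cos(theta), np.sin(theta)])  # Delta x_{t+1,t}

# Approximated transition w_g * gamma * d2 with the optimal w_g (Eq. 7),
# i.e. the orthogonal projection of d1 onto d2.
approx = (np.dot(d1, d2) / np.dot(d2, d2)) * d2

eps_r = np.dot(d1 - approx, d1 - approx) / np.dot(d1, d1)
assert eps_r < theta ** 2  # Eq. (8): relative error below tau^2
```

Here `eps_r` equals $\sin^2\theta \approx 0.022$, comfortably below $\tau^2 = 0.0225$.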
Error in practice: As the approximation extends over $N$ steps during inference, the error accumulates, leading to the accumulated error shown in Fig. 3b. The findings indicate that even when $32.5\%$ of the steps are approximated, the resulting error remains only $6.0\%$ , demonstrating that the error is nearly imperceptible. Notably, our method, LTC-Accel, achieves a PSNR of 36.6, as illustrated in Fig. 4.
# 4. Experiments
In this section, we conduct extensive experiments to validate our method's effectiveness. First, in Sec. 4.1, we provide an overview of the experimental setup. Then, in Sec. 4.2 and Sec. 4.3, we present a comprehensive evaluation of text-to-image and text-to-video synthesis. In Sec. 4.4, we assess the integration of our method with other methods. Finally, we conduct ablation studies and provide detailed experimental settings in the Appendix.
# 4.1. Experimental Settings
In all experiments, unless otherwise specified, we use a classifier-free guidance [5] scale of 7.5 without additional enhancement techniques. To evaluate acceleration performance, we perform inference in float16 and measure
<table><tr><td rowspan="2">Model</td><td colspan="3">DeepCache</td><td colspan="3">LTC-AccEL</td><td rowspan="2">Acceleration Condition</td><td rowspan="2">Speedup</td></tr><tr><td>Steps</td><td>Time(ms)</td><td>ImageReward↑</td><td>Steps</td><td>Time(ms)</td><td>ImageReward↑</td></tr><tr><td rowspan="4">SD v2</td><td>10</td><td>264</td><td>-0.2246</td><td>8</td><td>208</td><td>-0.2739</td><td>t mod r = r - 3, t > 4</td><td>1.25×</td></tr><tr><td>20</td><td>524</td><td>0.2445</td><td>16</td><td>419</td><td>0.2456</td><td>t mod r = r - 3, t > 8</td><td>1.25×</td></tr><tr><td>50</td><td>1411</td><td>0.4039</td><td>38</td><td>1038</td><td>0.4096</td><td>t mod r = r - 3, t > 12</td><td>1.41×</td></tr><tr><td>100</td><td>3014</td><td>0.4242</td><td>75</td><td>2171</td><td>0.4246</td><td>t mod r = r - 3, 24 < t ≤ 90</td><td>1.38×</td></tr></table>
Table 1. Quantitative evaluation of text-to-image generation via DDIM Sampling with LTC-Accel and DeepCache. In this experiment, the parameter $N$ of DeepCache is set to $N = 2$ , and the period parameter $r$ is set to $r = 3$ . The results are bold if they are better both in speed and quality. The results demonstrate that our method can be combined with DeepCache, achieving an additional $1.41 \times$ acceleration on top of DeepCache's speedup.
<table><tr><td rowspan="2">Model</td><td colspan="2">AYS</td><td colspan="2">LTC-Accel</td><td rowspan="2">Speedup</td></tr><tr><td>Steps</td><td>ImageReward↑</td><td>Steps</td><td>ImageReward↑</td></tr><tr><td rowspan="3">SD v1.5</td><td>10</td><td>0.1332</td><td>8</td><td>0.1309</td><td>1.25×</td></tr><tr><td>20</td><td>0.1632</td><td>15</td><td>0.1510</td><td>1.33×</td></tr><tr><td>30</td><td>0.1901</td><td>20</td><td>0.2131</td><td>1.50×</td></tr></table>
Table 2. Quantitative evaluation of text-to-image generation via DPM-Solver++ sampling with LTC-Accel and Align Your Steps. The results demonstrate that our method can be integrated with Align Your Steps, achieving a $1.25 \times$ acceleration with minimal impact on the generation quality.
speedup based on the number of inference steps. Additionally, as mentioned in Sec. 3.2, we set $\phi(t) = \sqrt{\mathrm{SNR}_t}$ for the current step. All experiments are conducted without any additional training, and we ensure that both the original model and LTC-Accel start from the same initial noise. In addition, for each prompt, we sample a unique starting noise to ensure variability.
# 4.2. Text-to-image Synthesis Task
In this section, we evaluate the performance of LTC-Accel in the text-to-image synthesis task.
Settings: We first introduce the configurations used for evaluation in this section: 1) Baselines: DDIM for Stable Diffusion v2 [36]; EDM [8] and DPM-Solver++ [22] for Stable Diffusion v3.5-mid [3]. 2) Datasets: We employ 1,000 random prompts from the MS-COCO 2017 dataset [21] for both the Stable Diffusion v2 and Stable Diffusion v3.5-mid trials. 3) Metrics: In our experiments with Stable Diffusion v2 and v3.5-mid, we use the ImageReward metric [50] and PickScore [13]. ImageReward is a model trained on human comparisons to evaluate text-to-image synthesis, considering factors such as alignment with text and aesthetic quality. PickScore, based on CLIP and trained on the Pick-a-Pic dataset, predicts user preferences for generated images.
Results: Fig. 5 presents the acceleration results from our experiments. LTC-Accel demonstrates consistently strong performance across different text-to-image models and sampling methods. In particular, we obtain a $1.67 \times$ speedup on Stable Diffusion v2 and v3.5, which is notably high for a training-free method.
# 4.3. Text-to-video Synthesis Task
In this section, we evaluate the performance of LTC-Accel on several video models in the text-to-video synthesis task.

Settings: The configurations used for evaluation are as follows: 1) Baselines: We use Animated-Diff [4] and CogVideoX 2B [53] as baselines, with epiCRealism [2] as the base model for Animated-Diff. 2) Datasets: We randomly select 100 prompts from the MS-COCO 2017 dataset [21]. 3) Metrics: Following prior work [48], we evaluate video quality using Frame Consistency and Textual Faithfulness. Specifically, for Frame Consistency, we compute CLIP image embeddings for each frame and report the average cosine similarity across all frame pairs. For Textual Faithfulness, we compute the average ImageReward score between each frame and its corresponding text prompt.
Results: As shown in Tab. 3, LTC-Accel obtains promising results across different video generation models. Notably, it can even achieve a $1.54 \times$ speedup with almost no impact on video generation quality.
# 4.4. Compatibility with Other Methods
Since LTC-Accel leverages the relationship between the network's outputs, it introduces a completely new approach to acceleration. This makes it compatible with a wide variety of acceleration methods. In this section, we combine LTC-Accel with various existing methods, both training-free and training-based,
<table><tr><td rowspan="2">Model</td><td colspan="3">Original</td><td colspan="3">LTC-AccEL</td><td rowspan="2">Speedup</td></tr><tr><td>Steps</td><td>Text↑</td><td>Smooth↑</td><td>Steps</td><td>Text↑</td><td>Smooth↑</td></tr><tr><td>Animated-Diff</td><td>10</td><td>0.2439</td><td>0.9713</td><td>7</td><td>0.2426</td><td>0.9700</td><td>1.43×</td></tr><tr><td>Animated-Diff</td><td>20</td><td>0.3050</td><td>0.9729</td><td>13</td><td>0.2939</td><td>0.9732</td><td>1.54×</td></tr><tr><td>Animated-Diff</td><td>30</td><td>0.3662</td><td>0.9676</td><td>19</td><td>0.3465</td><td>0.9681</td><td>1.58×</td></tr><tr><td>CogVideoX 2B</td><td>20</td><td>-0.1441</td><td>0.9442</td><td>14</td><td>-0.1673</td><td>0.9361</td><td>1.43×</td></tr><tr><td>CogVideoX 2B</td><td>30</td><td>0.2302</td><td>0.9464</td><td>19</td><td>0.2320</td><td>0.9435</td><td>1.58×</td></tr><tr><td>CogVideoX 2B</td><td>40</td><td>0.3918</td><td>0.9514</td><td>26</td><td>0.3775</td><td>0.9511</td><td>1.54×</td></tr><tr><td>CogVideoX 2B (int8)</td><td>40</td><td>0.2113</td><td>0.9456</td><td>26</td><td>0.2010</td><td>0.9449</td><td>1.54×</td></tr></table>
Table 3. Quantitative results of text-to-video, comparing Original and LTC-Accel settings. Text and Smooth denote Textual Faithfulness and Frame Consistency, respectively. The results are bold if they are better both in speed and quality. The results demonstrate that our method combines well with video models, achieving up to a $1.58 \times$ acceleration.
<table><tr><td rowspan="2">Model</td><td colspan="3">Original</td><td colspan="3">Distillation</td><td colspan="3">LTC-AccEL</td><td rowspan="2">Speedup</td></tr><tr><td>Steps</td><td>Text↑</td><td>Smooth↑</td><td>Steps</td><td>Text↑</td><td>Smooth↑</td><td>Steps</td><td>Text↑</td><td>Smooth↑</td></tr><tr><td>Animated-Diff-Lightning</td><td>4</td><td>0.3662</td><td>0.9685</td><td>3</td><td>0.2913</td><td>0.9673</td><td>3</td><td>0.3550</td><td>0.9645</td><td>1.33×</td></tr><tr><td>Animated-Diff-Lightning</td><td>8</td><td>0.3371</td><td>0.9697</td><td>5</td><td>0.2978</td><td>0.9690</td><td>5</td><td>0.3493</td><td>0.9654</td><td>1.60×</td></tr></table>
Table 4. Quantitative results of text-to-video, comparing Animated-Diff-Lightning (original steps), Animated-Diff-Lightning (same steps as LTC-AccEL), and LTC-AccEL. Results are bold if they are better in both speed and quality. The results demonstrate that our method can be integrated with Animated-Diff-Lightning, achieving an additional $1.60 \times$ acceleration with minimal impact on generation quality.
for text-to-image and text-to-video tasks.
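Because the method operates only on denoiser outputs, it composes with samplers and accelerators that modify other parts of the pipeline. The idea can be sketched as follows, with a toy quadratic "network" standing in for the real denoiser; the function names, the linear-extrapolation rule, and the toy update step are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def toy_denoiser(x, t):
    """Stand-in for the diffusion network's output at step t (illustrative only)."""
    return x * (1.0 - 0.05 * t)

def sample(steps, skip_every=None):
    """Toy sampling loop. Optionally replace every `skip_every`-th network call
    with a linear extrapolation of the two most recent outputs, exploiting
    local coherence between adjacent steps."""
    x = np.ones(4)
    prev_outs = []
    calls = 0
    for t in range(steps):
        if skip_every and len(prev_outs) >= 2 and t % skip_every == 0:
            # Estimate the current output from the two adjacent ones
            # instead of running the network.
            eps = 2 * prev_outs[-1] - prev_outs[-2]
        else:
            eps = toy_denoiser(x, t)
            calls += 1
        prev_outs.append(eps)
        x = x - 0.1 * eps  # toy update rule in place of a real solver step
    return x, calls

x_full, calls_full = sample(30)                  # 30 network calls
x_acc, calls_acc = sample(30, skip_every=3)      # 9 calls replaced by extrapolation
```

Since the skip rule touches only the sequence of network outputs, the same wrapper applies regardless of the solver or architecture underneath, which is why the method stacks with the baselines below.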
# 4.4.1. Combined with Training-Free Methods
In this part, our sampler configurations are set up as follows: 1) Baselines: We select DeepCache, Align Your Steps (AYS), and INT8 quantization as representative training-free methods and conduct experiments on Stable Diffusion v2. DeepCache accelerates diffusion models by exploiting temporal redundancy in the denoising process to cache and reuse features; Align Your Steps is a principled method for optimizing sampling schedules in diffusion models, particularly effective in few-step synthesis; INT8 quantization converts high-precision model parameters to 8-bit integers, accelerating inference while maintaining acceptable quality. 2) Datasets: 1,000 randomly sampled prompts from MS-COCO 2017. 3) Metrics: ImageReward, introduced in Sec. 4.2.
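The INT8 baseline can be illustrated with a minimal symmetric per-tensor scheme; this is a generic sketch of post-training weight quantization, not the exact scheme applied to CogVideoX:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

# Quantize a random weight matrix and measure the round-trip error.
w = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = float(np.abs(w - w_hat).max())  # bounded by half a quantization step
```

Storing `q` instead of `w` cuts weight memory to a quarter of FP32, and integer matmuls are what yield the inference speedup; the rounding error stays within half a quantization step.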
Results: Tab. 1 shows that combining our method with DeepCache yields a $2.3 \times$ speedup while also boosting ImageReward. Tab. 2 shows that 10 AYS steps achieve an ImageReward of 0.1332, while our method reaches 0.1309 with only 8 steps ($1.25 \times$ faster). At 20 AYS steps ImageReward is 0.1632, and our 15-step variant scores 0.1510 ($1.33 \times$ faster). At 30 AYS steps ImageReward is 0.1901, while our 20-step variant surpasses it with 0.2131 ($1.50 \times$ speedup). Finally, INT8 quantization on CogVideoX remains effective with our scheduler (Tab. 3).
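The quoted speedups follow directly from the ratio of step counts, since the per-step network cost is unchanged:

```python
# Speedup = original steps / reduced steps; per-step cost is identical,
# so wall-clock time scales with the number of network evaluations.
pairs = [(10, 8), (20, 15), (30, 20)]  # (AYS steps, our steps) from the comparison above
speedups = [round(orig / ours, 2) for orig, ours in pairs]
print(speedups)  # → [1.25, 1.33, 1.5]
```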
# 4.4.2. Combined with Training-Based Methods
In particular, to match the setup of the distilled model, we do not use classifier-free guidance in this section. Our sampler configurations are set up as follows: 1) Baselines: We use Animated-Diff-Lightning as a representative distilled model. Animated-Diff-Lightning [20] is a distilled version of the Animated-Diff model, designed to retain core functionality while optimizing performance. Specifically, we use the 8-step and 4-step weights of Animated-Diff-Lightning, where the 8-step weights correspond to 8-step inference and the 4-step weights to 4-step inference. 2) Datasets: 100 randomly sampled prompts from the MS-COCO 2017 dataset [21]. 3) Metrics: Frame Consistency and Textual Faithfulness, introduced in Sec. 4.3.
Results: The results in Tab. 4 show that our method effectively accelerates distilled models, even speeding up the 4-step model to 3 steps with minimal impact on video generation quality, compared to the original process at the same computational cost.
# 5. Limitations
Our method has two main limitations. First, its effectiveness relies on the Local Transition Coherence phenomenon, which weakens when very few sampling steps are used (fewer than three). Second, although training-free, the method requires tuning the acceleration intervals; the remaining hyperparameters are straightforward to set.
# 6. Acknowledgement
This work received strong support from all co-authors. Fan Cheng provided overall guidance and supplied the computational resources. Shangwen Zhu was deeply involved throughout the entire project. Han Zhang contributed powerful theoretical insights. Zhantao Yang offered profound advice on manuscript writing and experimental design. Qianyu Peng helped design the experimental coding pipeline. Zhao Pu and Huangji Wang also offered valuable suggestions on the paper. This work was also supported in part by the National Key R&D Program of China under Grant 2022YFA1005000, and in part by the NSFC under Grant 61701304.
# References
[1] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. In NeurIPS, 2021. 1
[2] Epinikion. epicrealism. https://civitai.com/models/25694. Accessed: 2023. 7
[3] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, Kyle Lacey, Alex Goodwin, Yannik Marek, and Robin Rombach. Scaling rectified flow transformers for high-resolution image synthesis, 2024. 7
[4] Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, and Bo Dai. Animatediff: Imagine your personalized text-to-image diffusion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023. 1, 2, 7
[5] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 2, 3, 6
[6] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. NeurIPS, 2020. 1
[7] Peng Huang, Xue Gao, Lihong Huang, Jing Jiao, Xiaokang Li, Yuanyuan Wang, and Yi Guo. Chest-diffusion: A light-weight text-to-image model for report-to-cxr generation. IEEE, 2024. 1
[8] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. NeurIPS, 2022. 3, 7
[9] Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, and Samuli Laine. Analyzing and improving the training dynamics of diffusion models. In CVPR, 2024. 1
[10] Tero Karras, Miika Aittala, Tuomas Kynkäänniemi, Jaakko Lehtinen, Timo Aila, and Samuli Laine. Guiding a diffusion model with a bad version of itself. NeurIPS, 2024. 2, 3
[11] Bosung Kim, Kyuhwan Lee, Isu Jeong, Jungmin Cheon, Yeojin Lee, and Seulki Lee. On-device sora: Enabling diffusion-based text-to-video generation for mobile devices. arXiv preprint arXiv:2502.04363, 2025. 1
[12] Dongjun Kim, Chieh-Hsin Lai, Wei-Hsiang Liao, Naoki Murata, Yuhta Takida, Toshimitsu Uesaka, Yutong He, Yuki Mitsufuji, and Stefano Ermon. Consistency trajectory models: Learning probability flow ode trajectory of diffusion. arXiv preprint arXiv:2310.02279, 2023. 2, 3
[13] Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, and Omer Levy. Pick-a-pic: An open dataset of user preferences for text-to-image generation. In NeurIPS, 2023. 7
[14] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. Diffwave: A versatile diffusion model for audio synthesis, 2021. 1
[15] Raghuraman Krishnamoorthi. Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342, 2018. 2
[16] Taegyeong Lee, Soyeong Kwon, and Taehwan Kim. Grid diffusion models for text-to-video generation. IEEE, 2024. 1
[17] Jean-Marie Lemercier, Julius Richter, Simon Welker, and Timo Gerkmann. Storm: A diffusion-based stochastic regeneration model for speech enhancement and dereverberation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023. 1
[18] Chenda Li, Samuele Cornell, Shinji Watanabe, and Yanmin Qian. Diffusion-based generative modeling with discriminative guidance for streamable speech enhancement. In 2024 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2024. 1
[19] Youwei Liang, Junfeng He, Gang Li, Peizhao Li, Arseniy Klimovskiy, Nicholas Carolan, Jiao Sun, Jordi Pont-Tuset, Sarah Young, Feng Yang, et al. Rich human feedback for text-to-image generation. In CVPR, 2024. 1
[20] Shanchuan Lin and Xiao Yang. Animatediff-lightning: Cross-model diffusion distillation. arXiv preprint arXiv:2403.12706, 2024. 1, 2, 8
[21] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014. 7, 8
[22] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models. arXiv preprint arXiv:2211.01095, 2022. 2, 3, 7
[23] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. NeurIPS, 2022. 2, 3
[24] Yen-Ju Lu, Zhong-Qiu Wang, Shinji Watanabe, Alexander Richard, Cheng Yu, and Yu Tsao. Conditional diffusion probabilistic model for speech enhancement. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022. 1
[25] Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. Latent consistency models: Synthesizing high-resolution images with few-step inference. arXiv preprint arXiv:2310.04378, 2023. 2, 3
[26] Xinyin Ma, Gongfan Fang, and Xinchao Wang. Deepcache: Accelerating diffusion models for free. IEEE, 2023. 2, 3
[27] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In ICML, 2021. 1
[28] Gyeongrok Oh, Jaehwan Jeong, Sieun Kim, Wonmin Byeon, Jinkyu Kim, Sungwoong Kim, and Sangpil Kim. Mevg: Multi-event video generation with text-to-video models. In ECCV, 2025. 1
[29] William Peebles and Saining Xie. Scalable diffusion models with transformers. In ICCV, 2023. 1
[30] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 1
[31] Fan Qi, Yu Duan, Huaiwen Zhang, and Changsheng Xu. Signgen: End-to-end sign language video generation with latent diffusion. In ECCV, 2025. 1
[32] Yurui Qian, Qi Cai, Yingwei Pan, Yehao Li, Ting Yao, Qibin Sun, and Tao Mei. Boosting diffusion models with moving average sampling in frequency domain. In CVPR, 2024. 2, 3
[33] Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollar. Designing network design spaces. In CVPR, 2020. 2
[34] Julius Richter, Simon Welker, Jean-Marie Lemercier, Bunlong Lay, and Timo Gerkmann. Speech enhancement and dereverberation with diffusion-based generative models. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023. 1
[35] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, 2022. 2
[36] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, 2022. 7
[37] Amirmojtaba Sabour, Sanja Fidler, and Karsten Kreis. Align your steps: Optimizing sampling schedules in diffusion models. arXiv preprint arXiv:2404.14507, 2024. 2, 3
[38] Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. arXiv preprint arXiv:2202.00512, 2022. 2, 3
[39] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In CVPR, 2018. 2, 3
[40] Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. Adversarial diffusion distillation. In ECCV, 2024. 2, 3
[41] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. 1, 3
[42] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. In ICML, 2023. 2, 3
[43] Ahmad Suleyman and Göksel Biricik. Grounding text-to-image diffusion models for controlled high-quality image generation. arXiv preprint arXiv:2501.09194, 2025. 1
[44] Wan Team. Wan: Open and advanced large-scale video generative models. 2025. 1
[45] Thanapat Trachu, Chawan Piansaddhayanon, and Ekapol Chuangsuwanich. Thunder: Unified regression-diffusion speech enhancement with a single reverse step using brownian bridge. arXiv preprint arXiv:2406.06139, 2024. 1
[46] Simon Welker, Julius Richter, and Timo Gerkmann. Speech enhancement with score-based generative models in the complex stft domain. arXiv preprint arXiv:2203.17004, 2022. 1
[47] Felix Wimbauer, Bichen Wu, Edgar Schoenfeld, Xiaoliang Dai, Ji Hou, Zijian He, Artsiom Sanakoyeu, Peizhao Zhang, Sam Tsai, Jonas Kohler, et al. Cache me if you can: Accelerating diffusion models through block caching. In CVPR, 2024. 2, 3
[48] Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. In ICCV, 2023. 7
[49] Changming Xiao, Qi Yang, Feng Zhou, and Changshui Zhang. From text to mask: Localizing entities using the attention of text-to-image diffusion models. Neurocomputing, 2024. 1
[50] Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: Learning and evaluating human preferences for text-to-image generation. NeurIPS, 2023. 7
[51] Shuchen Xue, Zhaoqiang Liu, Fei Chen, Shifeng Zhang, Tianyang Hu, Enze Xie, and Zhenguo Li. Accelerating diffusion sampling with optimized time steps. In CVPR, 2024. 2, 3
[52] Ruihan Yang, Prakhar Srivastava, and Stephan Mandt. Diffusion probabilistic modeling for video generation. Entropy, 2023. 1
[53] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024. 1, 7
[54] Tianwei Yin, Michael Gharbi, Richard Zhang, Eli Shechtman, Fredo Durand, William T Freeman, and Taesung Park. One-step diffusion with distribution matching distillation. In CVPR, 2024. 2, 3
[55] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In CVPR, 2018. 2, 3
[56] Wenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, and Jiwen Lu. Unipc: A unified predictor-corrector framework for fast sampling of diffusion models. NeurIPS, 2023. 3
[57] Yang Zhao, Yanwu Xu, Zhisheng Xiao, Haolin Jia, and Tingbo Hou. Mobilediffusion: Instant text-to-image generation on mobile devices. In ECCV. Springer, 2024. 1
[58] Kaiwen Zheng, Cheng Lu, Jianfei Chen, and Jun Zhu. Dpm solver-v3: Improved diffusion ode solver with empirical model statistics. NeurIPS, 2023. 2, 3
[59] Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, and Hai Huang. Score identity distillation: Exponentially fast distillation of pretrained diffusion models for one-step generation. In ICML, 2024. 2, 3
[60] Zhenyu Zhou, Defang Chen, Can Wang, and Chun Chen. Fast ode-based sampling for diffusion models in around 5 steps. In CVPR, 2024. 3 |