arxiv:2605.06376

Continuous-Time Distribution Matching for Few-Step Diffusion Distillation

Published on May 7 · Submitted by liutao on May 8
Abstract

Step distillation has become a leading technique for accelerating diffusion models, among which Distribution Matching Distillation (DMD) and Consistency Distillation are two representative paradigms. While consistency methods enforce self-consistency along the full PF-ODE trajectory to steer it toward the clean data manifold, vanilla DMD relies on sparse supervision at a few predefined discrete timesteps. This restricted discrete-time formulation, combined with the mode-seeking nature of the reverse KL divergence, tends to produce visual artifacts and over-smoothed outputs, often necessitating complex auxiliary modules -- such as GANs or reward models -- to restore visual fidelity. In this work, we introduce Continuous-Time Distribution Matching (CDM), migrating the DMD framework from discrete anchoring to continuous optimization for the first time. CDM achieves this through two continuous-time designs. First, we replace the fixed discrete schedule with a dynamic continuous schedule of random length, so that distribution matching is enforced at arbitrary points along sampling trajectories rather than only at a few fixed anchors. Second, we propose a continuous-time alignment objective that performs active off-trajectory matching on latents extrapolated via the student's velocity field, improving generalization and preserving fine visual details. Extensive experiments on different architectures, including SD3-Medium and Longcat-Image, demonstrate that CDM provides highly competitive visual fidelity for few-step image generation without relying on complex auxiliary objectives. Code is available at https://github.com/byliutao/cdm.

AI-generated summary

Continuous-Time Distribution Matching migrates diffusion model distillation from discrete to continuous optimization, enabling supervision at arbitrary points along sampling trajectories and preserving fine visual details through dynamic scheduling and velocity field extrapolation.
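The abstract's two continuous-time designs can be illustrated with a toy numpy sketch. This is not the paper's implementation: `student_velocity` is a hand-written stand-in for the learned velocity field v_theta(x, t), and `sample_continuous_schedule` / `extrapolate` are hypothetical helper names chosen here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def student_velocity(x, t):
    # Stand-in for the student's learned velocity field v_theta(x, t);
    # in the actual method this is a neural network.
    return -x * (1.0 - t)

def sample_continuous_schedule(rng, max_len=4):
    # Design 1: a dynamic continuous schedule of random length.
    # Instead of a few fixed discrete anchors, draw a random number of
    # timesteps uniformly in (0, 1) and order them from noise (t near 1)
    # toward data (t near 0), so matching happens at arbitrary points.
    n = rng.integers(1, max_len + 1)
    return np.sort(rng.uniform(0.0, 1.0, size=n))[::-1]

def extrapolate(x_t, t, s):
    # Design 2: off-trajectory matching on latents extrapolated via the
    # student's velocity field -- here a single Euler step of the PF-ODE
    # from time t to time s.
    return x_t + (s - t) * student_velocity(x_t, t)

schedule = sample_continuous_schedule(rng)
x = rng.standard_normal(4)  # toy latent
for t, s in zip(schedule[:-1], schedule[1:]):
    x = extrapolate(x, t, s)
    # ...a distribution-matching loss would be evaluated at x here...
```

The point of the sketch is only the control flow: the schedule is resampled continuously rather than fixed, and the latents being matched are produced by following the student's own velocity field.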

Community


(Teaser figure)

🚀 CDM: High-Fidelity 4-Step Image Generation without GANs/Reward Models!

We are excited to present Continuous-Time Distribution Matching (CDM), a new paradigm for diffusion distillation!

🔥 Highlights:

State-of-the-Art Quality: Achieves top performance (Aesthetic, HPSv3, PickScore) at just 4 NFE.

No Auxiliary Objectives: Bypasses complex GAN or reward-model tuning required by previous DMD methods, avoiding artifacts and over-smoothing.

Rich Details: Preserves extremely sharp textures and fine-grained details in few-step generation.

Open Release: distilled SD3-Medium and Longcat-Image models, with models and code fully open-sourced.
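For readers unfamiliar with the "4 NFE" figure above: NFE counts the number of network function evaluations per generated image. A minimal numpy sketch of what a 4-step sampler looks like, assuming a velocity-predicting student and plain Euler integration (`distilled_student` is a placeholder, not the released model):

```python
import numpy as np

def distilled_student(x, t):
    # Placeholder for the distilled few-step student network; a real
    # sampler would call the trained model to predict a velocity here.
    return -x

def generate(rng, steps=4, dim=8):
    # "4 NFE" means the student is evaluated exactly `steps` times:
    # start from pure noise at t=1 and Euler-step the latent to t=0.
    x = rng.standard_normal(dim)
    ts = np.linspace(1.0, 0.0, steps + 1)
    nfe = 0
    for t, s in zip(ts[:-1], ts[1:]):
        x = x + (s - t) * distilled_student(x, t)
        nfe += 1
    return x, nfe

x, nfe = generate(np.random.default_rng(0))
# nfe == 4
```

Each loop iteration is one network call, so the whole image costs four forward passes instead of the dozens used by an undistilled diffusion sampler.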


Get this paper in your agent:

hf papers read 2605.06376
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper: 2
