Accelerating Diffusion Model Training under Minimal Budgets: A Condensation-Based Perspective
Abstract
We present Diffusion Dataset Condensation (D2C), a framework that substantially reduces the computational cost of training diffusion models by constructing compact, informative subsets of large datasets through a two-phase process: selecting representative images and attaching enriched representations to them.
Diffusion models have achieved remarkable performance on a wide range of generative tasks, yet training them from scratch is notoriously resource-intensive, typically requiring millions of training images and many GPU days. Motivated by a data-centric view of this bottleneck, we adopt a condensation-based perspective: given a large training set, the goal is to construct a much smaller condensed dataset that still supports training strong diffusion models under minimal data and compute budgets. To operationalize this perspective, we introduce Diffusion Dataset Condensation (D2C), a two-phase framework comprising Select and Attach. In the Select phase, a diffusion difficulty score combined with interval sampling is used to identify a compact, informative training subset from the original data. Building on this subset, the Attach phase further strengthens the conditional signals by augmenting each selected image with rich semantic and visual representations. To our knowledge, D2C is the first framework that systematically investigates dataset condensation for diffusion models, whereas prior condensation methods have mainly targeted discriminative architectures. Extensive experiments across data budgets (0.8%-8% of ImageNet), model architectures, and image resolutions demonstrate that D2C dramatically accelerates diffusion model training while preserving high generative quality. On ImageNet 256x256 with SiT-XL/2, D2C attains an FID of 4.3 in just 40k steps using only 0.8% of the training images, corresponding to about 233x and 100x faster training than vanilla SiT-XL/2 and SiT-XL/2 + REPA, respectively.
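The abstract describes the Select phase only at a high level. As an illustration, interval sampling over a ranked difficulty score might look like the sketch below; the score array `scores` and the function `interval_select` are stand-ins, since the paper's diffusion difficulty score is not defined in this abstract.

```python
import numpy as np

def interval_select(scores, budget):
    """Pick `budget` samples by ranking on a per-sample difficulty score,
    splitting the ranking into `budget` equal intervals, and taking the
    median-ranked sample of each interval, so the subset spans the full
    easy-to-hard range rather than clustering at one extreme."""
    order = np.argsort(scores)                 # indices, easiest -> hardest
    intervals = np.array_split(order, budget)  # contiguous rank intervals
    return np.array([iv[len(iv) // 2] for iv in intervals])

# Toy example: 1000 samples with synthetic difficulty scores.
rng = np.random.default_rng(0)
scores = rng.normal(size=1000)
subset = interval_select(scores, budget=8)     # a 0.8% budget, as in D2C
print(len(subset))  # 8
```

The interval structure is the point of the sketch: a naive top-k selection by difficulty would keep only the hardest samples, whereas stratifying over rank intervals preserves coverage of the score distribution under a tight budget.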
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- IMS3: Breaking Distributional Aggregation in Diffusion-Based Dataset Distillation (2026)
- Coevolving Representations in Joint Image-Feature Diffusion (2026)
- Accelerating Diffusion Decoders via Multi-Scale Sampling and One-Step Distillation (2026)
- DMGD: Train-Free Dataset Distillation with Semantic-Distribution Matching in Diffusion Models (2026)
- Learnability-Guided Diffusion for Dataset Distillation (2026)
- DiffSparse: Accelerating Diffusion Transformers with Learned Token Sparsity (2026)
- Diffusion Model as a Generalist Segmentation Learner (2026)