---
license: apache-2.0
pipeline_tag: unconditional-image-generation
tags:
- image-generation
- diffusion-transformer
---
# IG: Guiding a Diffusion Transformer with the Internal Dynamics of Itself
This repository contains the official PyTorch checkpoints for **Internal Guidance (IG)**, as presented in the paper [Guiding a Diffusion Transformer with the Internal Dynamics of Itself](https://huggingface.co/papers/2512.24176).
Internal Guidance (IG) is a simple yet effective strategy that adds auxiliary supervision to intermediate layers during training. At sampling time, it extrapolates between the outputs of intermediate and deep layers to achieve superior generative results. This approach yields significant improvements in both training efficiency and image quality across baselines such as SiT and LightningDiT.
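The sampling-time extrapolation can be sketched as below. This is a minimal scalar illustration, assuming a linear extrapolation form analogous to classifier-free guidance; the function name and the guidance scale `w` are placeholders, not the paper's exact formulation (see the code repository for the official implementation):

```python
def internal_guidance(deep_pred: float, intermediate_pred: float, w: float) -> float:
    """Extrapolate the deep-layer prediction away from the intermediate-layer one.

    Scalar sketch for illustration; in practice this is applied elementwise
    to the model's predicted outputs (e.g. noise or velocity tensors).
    w = 0 recovers the plain deep-layer prediction.
    """
    return deep_pred + w * (deep_pred - intermediate_pred)
```

With `w > 0`, the guided prediction is pushed further in the direction the deep layers have refined relative to the intermediate layers, mirroring how classifier-free guidance extrapolates conditional away from unconditional predictions.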
- **Paper:** [https://huggingface.co/papers/2512.24176](https://huggingface.co/papers/2512.24176)
- **Project Page:** [https://zhouxingyu13.github.io/Internal-Guidance/](https://zhouxingyu13.github.io/Internal-Guidance/)
- **Code:** [https://github.com/CVL-UESTC/Internal-Guidance](https://github.com/CVL-UESTC/Internal-Guidance)
## Results
On ImageNet 256x256, IG-guided models achieve state-of-the-art performance:
- **SiT-XL/2 + IG**: FID = 1.75 at 800 epochs.
- **LightningDiT-XL/1 + IG**: FID = 1.34 (random sampling).
- **LightningDiT-XL/1 + IG + CFG**: FID = 1.19 (random sampling) and **1.07** (uniform balanced sampling).
## Citation
If you find this work helpful or inspiring, please cite it:
```bibtex
@article{zhou2025guiding,
  title={Guiding a Diffusion Transformer with the Internal Dynamics of Itself},
  author={Zhou, Xingyu and Li, Qifan and Hu, Xiaobin and Chen, Hai and Gu, Shuhang},
  journal={arXiv preprint arXiv:2512.24176},
  year={2025}
}
```