  # DDT: Decoupled Diffusion Transformer
<div style="text-align: center;">
<a href="https://arxiv.org/abs/2504.05741"><img src="https://img.shields.io/badge/arXiv-2504.05741-b31b1b.svg" alt="arXiv"></a>
<a href="https://huggingface.co/papers/2504.05741"><img src="https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-sm.svg" alt="Paper page"></a>
</div>

<div style="text-align: center;">
<a href="https://paperswithcode.com/sota/image-generation-on-imagenet-256x256?p=ddt-decoupled-diffusion-transformer"><img src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/ddt-decoupled-diffusion-transformer/image-generation-on-imagenet-256x256" alt="PWC"></a>
<a href="https://paperswithcode.com/sota/image-generation-on-imagenet-512x512?p=ddt-decoupled-diffusion-transformer"><img src="https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/ddt-decoupled-diffusion-transformer/image-generation-on-imagenet-512x512" alt="PWC"></a>
</div>
## Introduction
We decouple the diffusion transformer into an encoder-decoder design, and surprisingly find that a **more substantial encoder yields performance improvements as model size increases**.
![](./figs/main.png)
* We achieve **1.26 FID** on the ImageNet 256x256 benchmark with DDT-XL/2 (22en6de).
* We achieve **1.28 FID** on the ImageNet 512x512 benchmark with DDT-XL/2 (22en6de).
* As a byproduct, DDT can reuse encoder outputs across adjacent denoising steps to accelerate inference.
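The encoder-reuse trick can be pictured with a small sketch. This is a toy illustration of the caching pattern only, not the repo's actual sampler API (`ToyEncoder`, `ToyDecoder`, and `sample_with_encoder_reuse` are hypothetical names): the encoder output is refreshed every few steps and cached in between, while the decoder still runs at every step.

```python
class ToyEncoder:
    """Stand-in for the heavy condition encoder."""
    def __init__(self):
        self.calls = 0

    def __call__(self, x, t):
        self.calls += 1
        return [xi + t for xi in x]  # stand-in for self-condition features

class ToyDecoder:
    """Stand-in for the lighter velocity decoder."""
    def __call__(self, x, z, t):
        # one denoising update conditioned on the cached encoder features z
        return [xi - 0.1 * zi for xi, zi in zip(x, z)]

def sample_with_encoder_reuse(x, steps, encoder, decoder, reuse_every=2):
    z = None
    for i in range(steps):
        t = 1.0 - i / steps
        if i % reuse_every == 0:  # recompute encoder features only occasionally
            z = encoder(x, t)
        x = decoder(x, z, t)      # the decoder runs every step
    return x

enc, dec = ToyEncoder(), ToyDecoder()
out = sample_with_encoder_reuse([1.0, 2.0], steps=8, encoder=enc, decoder=dec)
# enc.calls is 4: the encoder ran half as often as the decoder
```

Because adjacent timesteps produce similar encoder features, skipping the recomputation trades a small amount of fidelity for roughly `reuse_every`-fold fewer encoder forward passes.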
## Visualizations
![](./figs/teaser.png)
## Checkpoints
We take the off-the-shelf [VAE](https://huggingface.co/stabilityai/sd-vae-ft-ema) to encode images into latent space, and train DDT in that latent space.

| Dataset     | Model             | Params | FID  | HuggingFace                                              |
|-------------|-------------------|--------|------|----------------------------------------------------------|
| ImageNet256 | DDT-XL/2(22en6de) | 675M   | 1.26 | [🤗](https://huggingface.co/MCG-NJU/DDT-XL-22en6de-R256) |
| ImageNet512 | DDT-XL/2(22en6de) | 675M   | 1.28 | [🤗](https://huggingface.co/MCG-NJU/DDT-XL-22en6de-R512) |
## Online Demos
Coming soon.
## Usages
We use the ADM evaluation suite to report FID.
```bash
# installation
pip install -r requirements.txt
```
```bash
# inference
python main.py predict -c configs/repa_improved_ddt_xlen22de6_256.yaml --ckpt_path=XXX.ckpt
```
```bash
# training
# extract image latents (optional)
python3 tools/cache_imlatent4.py
# train
python main.py fit -c configs/repa_improved_ddt_xlen22de6_256.yaml
```
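FID, as reported by the ADM suite, is the Fréchet distance between two Gaussians fitted to Inception features of real and generated images. For reference, a minimal NumPy sketch of that distance (illustrative only, not the ADM evaluation code; `frechet_distance` is a hypothetical name):

```python
import numpy as np

def _sqrtm_psd(a):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2).

    Uses tr((S1 S2)^(1/2)) = tr((S1^(1/2) S2 S1^(1/2))^(1/2)) so only
    symmetric PSD square roots are needed (no general sqrtm).
    """
    s1h = _sqrtm_psd(sigma1)
    covmean = _sqrtm_psd(s1h @ sigma2 @ s1h)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * np.trace(covmean))

# Identical feature statistics give a distance of zero.
mu, sigma = np.zeros(4), np.eye(4)
d = frechet_distance(mu, sigma, mu, sigma)
```

In practice the means and covariances come from ~50k Inception-v3 feature vectors per side, which is why the suite needs a large sample of generated images.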
## Reference
```bibtex
@article{ddt,
  title         = "DDT: Decoupled Diffusion Transformer",
  author        = "Wang, Shuai and Tian, Zhi and Huang, Weilin and Wang, Limin",
  month         = apr,
  year          = 2025,
  archivePrefix = "arXiv",
  primaryClass  = "cs.CV",
  eprint        = "2504.05741"
}
```