---
license: apache-2.0
datasets:
- gaunernst/ffhq-1024-wds
---
# MADFormer-FFHQ
This repository provides checkpoints for MADFormer trained on **FFHQ-1024**, combining autoregressive global conditioning with diffusion-based local refinement for high-resolution image synthesis.
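The split can be pictured as a two-stage loop: an autoregressive pass conditions each image block on previously generated blocks, then a few diffusion-style denoising steps refine that block locally. The toy below is purely illustrative (it is not the released architecture; the weight shapes, step count, and update rule are invented for the sketch), but it shows the control flow:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of MADFormer-style generation (NOT the released model):
# an AR step builds global context from previous blocks, then a short
# diffusion-style loop refines the current block from noise.
def generate(num_blocks=4, block_dim=16, diff_steps=3):
    W_ar = rng.normal(scale=0.1, size=(block_dim, block_dim))        # stand-in for AR layers
    W_diff = rng.normal(scale=0.1, size=(2 * block_dim, block_dim))  # stand-in for diffusion layers
    blocks, prev = [], np.zeros(block_dim)
    for _ in range(num_blocks):
        ctx = np.tanh(prev @ W_ar)          # global AR conditioning on prior blocks
        x = rng.normal(size=block_dim)      # local refinement starts from noise
        for _ in range(diff_steps):
            # toy denoising update conditioned on the AR context
            x = x - 0.1 * (np.concatenate([x, ctx]) @ W_diff)
        blocks.append(x)
        prev = x
    return np.stack(blocks)

print(generate().shape)  # (4, 16)
```

Shifting layers between the two stages is what gives the AR↔Diff trade-off discussed below.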
---
## 📄 Paper
[MADFormer: Mixed Autoregressive & Diffusion Transformers for Continuous Image Generation](https://arxiv.org/abs/2506.07999)
---
## 📦 Checkpoints
- Trained for **210k steps** on FFHQ-1024
- Download checkpoint: `ckpts.pt`
---
## 🧪 How to Use
The released file is a PyTorch checkpoint; model construction and sampling are defined in the MADFormer codebase accompanying the paper. As a minimal sketch, the weights themselves can be loaded and inspected with:

```python
import torch

# Load the released checkpoint on CPU; build the model and sample
# using the MADFormer codebase from the paper.
state_dict = torch.load("ckpts.pt", map_location="cpu")
```
> 💡 MADFormer supports flexible AR↔Diff trade-offs. On FFHQ-1024, increasing the AR layer allocation yields up to a **75% FID improvement** under low-NFE (number of function evaluations) settings.
---
## 📚 Citation
If you find our work useful, please cite:
```bibtex
@misc{chen2025madformermixedautoregressivediffusion,
  title={MADFormer: Mixed Autoregressive and Diffusion Transformers for Continuous Image Generation},
  author={Junhao Chen and Yulia Tsvetkov and Xiaochuang Han},
  year={2025},
  eprint={2506.07999},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.07999},
}
```