Improve model card for Transition Models (TiM) #1
opened by nielsr (HF Staff)

README.md CHANGED
---
license: apache-2.0
pipeline_tag: text-to-image
---

# Transition Models: Rethinking the Generative Learning Objective

This repository contains the official implementation of **Transition Models (TiM)**, a novel generative model presented in the paper "[Transition Models: Rethinking the Generative Learning Objective](https://huggingface.co/papers/2509.04394)".

TiM addresses the dilemma between high-fidelity many-step sampling and efficient few-step generation by introducing an exact, continuous-time dynamics equation that analytically defines state transitions across any finite time interval. This enables a generative paradigm that adapts to arbitrary-step transitions, seamlessly traversing the generative trajectory from a single leap to fine-grained refinement with many steps.

For more detailed information, code, and usage instructions, please refer to the official [GitHub repository](https://github.com/WZDTHU/TiM).
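
To make the arbitrary-step idea above concrete, here is a minimal sketch of what such a sampler could look like. The `transition` callable and `sample` helper below are illustrative placeholders, not the repository's actual API; refer to the GitHub repository linked above for real usage.

```python
import torch

@torch.no_grad()
def sample(transition, cond, shape, schedule):
    """Sketch of arbitrary-step sampling (placeholder, not the official TiM API).

    transition(x_t, t, s, cond) -> x_s : a learned state-to-state map that can
        jump across any finite interval [s, t] of the generative trajectory.
    schedule : decreasing times from 1.0 (noise) to 0.0 (data); its length sets
        the sampling budget, from a single leap to fine-grained refinement.
    """
    x = torch.randn(shape)  # start from Gaussian noise at t = 1
    for t, s in zip(schedule[:-1], schedule[1:]):
        x = transition(x, torch.tensor(t), torch.tensor(s), cond)
    return x

# The same transition function serves every regime, e.g.:
#   sample(f, cond, (1, 32, 32, 32), [1.0, 0.0])                          # single leap
#   sample(f, cond, (1, 32, 32, 32), torch.linspace(1, 0, 9).tolist())    # 8 transitions
```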
## Highlights

* **Arbitrary-Step Generation**: TiM learns to master arbitrary state-to-state transitions, unifying few-step and many-step regimes within a single model. This allows it to learn the entire solution manifold of the generative process.
* **State-of-the-Art Performance**: Despite having only 865M parameters, TiM surpasses leading models such as SD3.5 (8B parameters) and FLUX.1 (12B parameters) across all evaluated step counts on the GenEval benchmark.
* **Monotonic Quality Improvement**: Unlike previous few-step generators, TiM shows consistent quality improvement as the sampling budget increases.
* **High-Resolution Fidelity**: With its native-resolution strategy, TiM delivers exceptional fidelity at resolutions up to 4096x4096.

<p align="center">
  <img src="https://github.com/WZDTHU/TiM/raw/main/assets/illustration.png" width="800" alt="TiM Illustration">
</p>

## Model Zoo

A single TiM model performs any-step generation (one-step, few-step, and multi-step) and shows monotonic quality improvement as the sampling budget increases.
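
For orientation only: NFE counts model forward passes, so under the simplest assumption of one forward pass per transition, an n-transition schedule costs n NFE. A hypothetical helper matching the budgets reported below:

```python
import torch

def uniform_schedule(nfe: int) -> list[float]:
    """Hypothetical uniformly spaced time schedule from 1.0 (noise) to 0.0 (data).

    Assumes one model forward pass per transition; the schedules actually used
    by TiM may be non-uniform and are defined in the official repository.
    """
    return torch.linspace(1.0, 0.0, nfe + 1).tolist()

print(uniform_schedule(1))   # [1.0, 0.0]: a single leap (1-NFE column)
print(uniform_schedule(8))   # 9 time points, 8 transitions (8-NFE column)
```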
### Text-to-Image Generation

| Model | Model Size | VAE | 1-NFE GenEval | 8-NFE GenEval | 128-NFE GenEval |
|---------|------------|-----|---------------|---------------|-----------------|
| TiM-T2I | 865M | [DC-AE](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers) | 0.67 | 0.76 | 0.83 |

### Class-guided Image Generation

| Model | Model Size | VAE | 2-NFE FID | 500-NFE FID |
|-------------|------------|-----|-----------|-------------|
| TiM-C2I-256 | 664M | [SD-VAE](https://huggingface.co/stabilityai/sd-vae-ft-ema) | 6.14 | 1.65 |
| TiM-C2I-512 | 664M | [DC-AE](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers) | 4.79 | 1.69 |
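
The checkpoints above generate in the latent space of the autoencoders listed in the VAE column, so samples are decoded to pixels with that autoencoder. A minimal decoding sketch with `diffusers`, assuming DC-AE f32c32's latent layout (32 channels, 32x spatial downsampling) and using a random latent as a stand-in for a TiM sample:

```python
import torch
from diffusers import AutoencoderDC

# DC-AE autoencoder used by TiM-T2I and TiM-C2I-512; the SD-VAE checkpoint
# (stabilityai/sd-vae-ft-ema) would be loaded with diffusers.AutoencoderKL instead.
ae = AutoencoderDC.from_pretrained("mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers")

# Stand-in latent: f32c32 means 32 latent channels and 32x spatial downsampling,
# so a 1024x1024 image corresponds to a 1x32x32x32 latent tensor.
latents = torch.randn(1, 32, 32, 32)

with torch.no_grad():
    image = ae.decode(latents).sample  # tensor of shape (1, 3, 1024, 1024)
```

In practice the latent would come from the TiM sampler rather than `torch.randn`.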
## Citation

If you find this project useful, please cite:

```bibtex
@article{wang2025transition,
  title={Transition Models: Rethinking the Generative Learning Objective},
  author={Wang, Zidong and Zhang, Yiyuan and Yue, Xiaoyu and Yue, Xiangyu and Li, Yangguang and Ouyang, Wanli and Bai, Lei},
  year={2025},
  eprint={2509.04394},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```

## License

This project is licensed under the Apache-2.0 license.