Add library_name metadata
#2 by nielsr (HF Staff), opened

README.md CHANGED
```diff
@@ -1,6 +1,7 @@
 ---
 license: mit
 pipeline_tag: image-to-video
+library_name: diffusers
 ---
 
 <h1 align="center">Diffusion Forcing Transformer</h1>
@@ -23,7 +24,7 @@ pipeline_tag: image-to-video
 <h3 align="center"><a href="https://arxiv.org/abs/2502.06764">Paper</a> | <a href="https://boyuan.space/history-guidance">Website</a> | <a href="https://huggingface.co/spaces/kiwhansong/diffusion-forcing-transformer">HuggingFace Demo</a> | <a href="https://github.com/kwsong0113/diffusion-forcing-transformer">GitHub Code</a></h3>
 </p>
 
-This is the official model hub for the paper [**_History-guided Video Diffusion_**](https://arxiv.org/abs/2502.06764). We introduce the **Diffusion Forcing Transformer (DFoT)**, a novel video diffusion model designed to generate videos conditioned on an arbitrary number of context frames.
+This is the official model hub for the paper [**_History-guided Video Diffusion_**](https://arxiv.org/abs/2502.06764). We introduce the **Diffusion Forcing Transformer (DFoT)**, a novel video diffusion model designed to generate videos conditioned on an arbitrary number of context frames. Additionally, we present **History Guidance (HG)**, a family of guidance methods uniquely enabled by DFoT. These methods significantly enhance video generation quality, temporal consistency, and motion dynamics, while also unlocking new capabilities such as compositional video generation and the stable rollout of extremely long videos.
 
 
 
@@ -44,7 +45,6 @@ Please check it out and have fun generating videos with DFoT!
 All pretrained models can be automatically loaded from [our GitHub codebase](https://github.com/kwsong0113/diffusion-forcing-transformer). Please visit our repository for further instructions!
 
 
-
 ## 📌 Citation
 
 If our work is useful for your research, please consider citing our paper:
```
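The effect of the one-line change is visible in the card's YAML front matter: `library_name: diffusers` tells the Hub which library the model targets. A minimal, standard-library-only sketch of reading that field is below; the inlined `README` string and the naive `front_matter` parser are illustrative only (a real card would be loaded with `huggingface_hub.ModelCard`, which parses the front matter properly):

```python
# Illustrative sketch: extract the key/value pairs between the leading
# '---' fences of a model card, as modified by this PR.
README = """\
---
license: mit
pipeline_tag: image-to-video
library_name: diffusers
---

<h1 align="center">Diffusion Forcing Transformer</h1>
"""

def front_matter(text: str) -> dict:
    """Naively parse 'key: value' lines between the first pair of '---' fences."""
    body = text.split("---")[1]
    pairs = (line.split(":", 1) for line in body.strip().splitlines())
    return {key.strip(): value.strip() for key, value in pairs}

meta = front_matter(README)
print(meta["library_name"])  # diffusers
```

Before this PR, `front_matter` would return only `license` and `pipeline_tag`; the added key is what lets the Hub surface library-specific loading instructions for the repository.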