nielsr (HF Staff) committed
Commit dd7c8bf · verified · 1 Parent(s): b430f0e

Add library_name metadata


This PR improves the model card by adding the `library_name` metadata field, ensuring the model is correctly categorized as compatible with the `diffusers` library on the Hugging Face Hub.
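The change amounts to inserting one key into the README's YAML front matter, just before the closing `---` delimiter. A minimal sketch of that edit in plain Python (this is not the Hub's actual tooling; the `add_library_name` helper is hypothetical, written only to mirror this PR's two-line change):

```python
def add_library_name(readme: str, library: str) -> str:
    """Insert `library_name: <library>` before the closing `---` of the
    YAML front matter, unless the field is already present."""
    lines = readme.splitlines()
    assert lines[0] == "---", "model card must start with YAML front matter"
    close = lines.index("---", 1)  # closing delimiter of the front matter
    if not any(l.startswith("library_name:") for l in lines[1:close]):
        lines.insert(close, f"library_name: {library}")
    return "\n".join(lines)


# The front matter of this repository's README before the PR:
card = """---
license: mit
pipeline_tag: image-to-video
---

<h1 align="center">Diffusion Forcing Transformer</h1>"""

print(add_library_name(card, "diffusers"))
```

In practice the same update can be made through the `huggingface_hub` library's model-card utilities rather than by hand-editing the file.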

Files changed (1)
  1. README.md +2 -2
README.md CHANGED

@@ -1,6 +1,7 @@
 ---
 license: mit
 pipeline_tag: image-to-video
+library_name: diffusers
 ---
 
 <h1 align="center">Diffusion Forcing Transformer</h1>
@@ -23,7 +24,7 @@ pipeline_tag: image-to-video
 <h3 align="center"><a href="https://arxiv.org/abs/2502.06764">Paper</a> | <a href="https://boyuan.space/history-guidance">Website</a> | <a href="https://huggingface.co/spaces/kiwhansong/diffusion-forcing-transformer">HuggingFace Demo</a> | <a href="https://github.com/kwsong0113/diffusion-forcing-transformer">GitHub Code</a></h3>
 </p>
 
-This is the official model hub for the paper [**_History-guided Video Diffusion_**](https://arxiv.org/abs/2502.06764). We introduce the **Diffusion Forcing Transformer (DFoT)**, a novel video diffusion model designed to generate videos conditioned on an arbitrary number of context frames. Additionally, we present **History Guidance (HG)**, a family of guidance methods uniquely enabled by DFoT. These methods significantly enhance video generation quality, temporal consistency, and motion dynamics, while also unlocking new capabilities such as compositional video generation and the stable rollout of extremely long videos.
+This is the official model hub for the paper [**_History-guided Video Diffusion_**](https://arxiv.org/abs/2502.06764). We introduce the **Diffusion Forcing Transformer (DFoT)**, a novel video diffusion model designed to generate videos conditioned on an arbitrary number of context frames. Additionally, we present **History Guidance (HG)**, a family of guidance methods uniquely enabled by DFoT. These methods significantly enhance video generation quality, temporal consistency, and motion dynamics, while also unlocking new capabilities such as compositional video generation and the stable rollout of extremely long videos.
 
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6613663bcfbba5e761a69531/OcsBrHWZXQidH7YxGCMtS.png)
@@ -44,7 +45,6 @@ Please check it out and have fun generating videos with DFoT!
 All pretrained models can be automatically loaded from [our GitHub codebase](https://github.com/kwsong0113/diffusion-forcing-transformer). Please visit our repository for further instructions!
 
 
-
 ## 📌 Citation
 
 If our work is useful for your research, please consider citing our paper: