AhmadMustafa committed on
Commit 403531c · 1 Parent(s): 0f8fca2

Add Hugging Face Spaces metadata to README

Files changed (1):
  1. README.md +41 -88
README.md CHANGED
@@ -1,101 +1,43 @@
- ## Sci-Fi: Symmetric Constraint for Frame Inbetweening
- <h5>Liuhan Chen<sup>1</sup>, <a href='https://vinthony.github.io'>Xiaodong Cun</a><sup>2,*</sup>, <a href='https://xiaoyu258.github.io/'>Xiaoyu Li</a><sup>3</sup>, Xianyi He<sup>1,4</sup>, Shenghai Yuan<sup>1,4</sup>, Jie Chen<sup>1</sup>, Ying Shan<sup>3</sup>, Li Yuan<sup>1,*</sup></h5>
-
- <sup>1</sup>Shenzhen Graduate School, Peking University &nbsp;&nbsp;&nbsp; <sup>2</sup><a href='https://gvclab.github.io'>GVC Lab, Great Bay University</a> &nbsp;&nbsp;&nbsp;
- <sup>3</sup>ARC Lab, Tencent PCG &nbsp;&nbsp;&nbsp; <sup>4</sup>Rabbitpre Intelligence
-
- We have updated our paper with a new version and changed the name of our framework from Sci-Fi to EF-VI.
- **[Arxiv](https://arxiv.org/abs/2505.21205) | [PDF](https://arxiv.org/pdf/2505.21205)**
-
- ## Video demos
- [![Video demos of our Sci-Fi](overview/video_demos.png)](https://youtu.be/_YfFH-uNYQk)
-
- [or click here to download the compressed version](overview/video_demos.mp4)
-
- ## Method comparison
- <div align="center">
-   <img src="overview/comparison.png" width="720" alt="Comparison">
- </div>
- <strong>(a)</strong> In current I2V-DM-based methods, the end-frame constraint is weaker than the start-frame constraint: it uses the same injection mechanism but a smaller training scale, causing a distorted predicted path with collapsed content.<br> <strong>(b)</strong> Our Sci-Fi keeps the start-frame processing unchanged while strengthening end-frame constraint injection. This achieves symmetric start- and end-frame constraints at a small training scale, yielding a predicted path close to the real one and smoother inbetweening.
-
- ## Some challenging examples of our Sci-Fi for frame inbetweening
- <table class="center">
-   <tr style="font-weight: bolder;text-align:center;">
-     <td>Start frame</td>
-     <td>End frame</td>
-     <td>Generated video</td>
-   </tr>
-   <tr>
-     <td><img src="example_input_pairs/input_pair1/start.jpg" width="250"></td>
-     <td><img src="example_input_pairs/input_pair1/end.jpg" width="250"></td>
-     <td><img src="example_output_gifs/input_pair1.gif" width="250"></td>
-   </tr>
-   <tr>
-     <td><img src="example_input_pairs/input_pair2/start.jpg" width="250"></td>
-     <td><img src="example_input_pairs/input_pair2/end.jpg" width="250"></td>
-     <td><img src="example_output_gifs/input_pair2.gif" width="250"></td>
-   </tr>
-   <tr>
-     <td><img src="example_input_pairs/input_pair3/start.jpg" width="250"></td>
-     <td><img src="example_input_pairs/input_pair3/end.jpg" width="250"></td>
-     <td><img src="example_output_gifs/input_pair3.gif" width="250"></td>
-   </tr>
-   <tr>
-     <td><img src="example_input_pairs/input_pair4/start.jpg" width="250"></td>
-     <td><img src="example_input_pairs/input_pair4/end.jpg" width="250"></td>
-     <td><img src="example_output_gifs/input_pair4.gif" width="250"></td>
-   </tr>
- </table>
-
- ## Deployment for personal use
- ### 1. Set up the repository and environment
- ```
- git clone https://github.com/GVCLab/Sci-Fi.git
- cd Sci-Fi
- conda create -n Sci-Fi python=3.12
- conda activate Sci-Fi
- pip install -r requirements.txt
- ```
- ### 2. Download the checkpoints
- Download the CogVideoX-5B-I2V model (its transformer denoiser weights differ from the original due to fine-tuning) and EF-Net.
- The weights are available at [🤗 HuggingFace](https://huggingface.co/LiuhanChen/Sci-Fi) and [🤖 ModelScope](https://www.modelscope.cn/models/clhxclh/Sci-Fi).
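For reference, the Hugging Face download can be scripted with `huggingface_hub`. This is a minimal sketch: the repo id comes from the link above, but the `checkpoints/` target directory name and the helper function are assumptions, not part of the repo.

```python
# Hypothetical helper: the repo id is from this README; the local
# "checkpoints" directory name is an assumption about where weights go.
from huggingface_hub import snapshot_download

def download_sci_fi_weights(local_dir: str = "checkpoints") -> str:
    """Fetch the fine-tuned CogVideoX-5B-I2V weights and EF-Net."""
    return snapshot_download(repo_id="LiuhanChen/Sci-Fi", local_dir=local_dir)
```

Calling `download_sci_fi_weights()` fetches the full repository; ModelScope users can download manually from the link above.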
-
- ### 3. Launch the inference script
- The example input keyframe pairs are in the `examples/` folder, and the corresponding generated videos (720x480, 49 frames) are placed in the `outputs/` folder.
- To interpolate, run:
- ```
- bash Sci_Fi_frame_inbetweening.sh
- ```
-
  ## Citation
- 🌟 If you find our work helpful, please leave us a star and cite our paper.
- ```
  @article{chen2025sci,
    title={Sci-Fi: Symmetric Constraint for Frame Inbetweening},
    author={Chen, Liuhan and Cun, Xiaodong and Li, Xiaoyu and He, Xianyi and Yuan, Shenghai and Chen, Jie and Shan, Ying and Yuan, Li},
@@ -103,3 +45,14 @@ bash Sci_Fi_frame_inbetweening.sh
    year={2025}
  }
  ```
+ ---
+ title: Sci-Fi Frame Inbetweening
+ emoji: 🎬
+ colorFrom: blue
+ colorTo: purple
+ sdk: gradio
+ sdk_version: 4.44.0
+ app_file: gradio_inference.py
+ pinned: false
+ license: apache-2.0
+ ---
+
+ # Sci-Fi: Symmetric Constraint for Frame Inbetweening
+
+ Generate smooth frame interpolation videos using CogVideoX-5B with EF-Net.
+
+ ## Features
+
+ - 🎬 High-quality frame inbetweening
+ - ⚡ GPU-accelerated inference with Spaces GPU
+ - 🎨 Customizable generation parameters
+ - 📊 Real-time progress tracking
+
+ ## Usage
+
+ 1. **Setup Tab**: Load the model pipeline (required first step)
+ 2. **Generate Tab**: Upload start and end frames, enter a prompt
+ 3. **Advanced Settings**: Fine-tune generation parameters
+
+ ## Model Details
+
+ - **Base Model**: CogVideoX-5B-I2V
+ - **EF-Net**: Custom enhancement network for symmetric frame constraints
+ - **Output**: 49 frames at 7 FPS (720x480)
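As a quick sanity check, the stated output spec implies a 7-second clip; the raw-size figure below is an illustrative uncompressed-RGB estimate, not a claim about the saved file size.

```python
# Sanity-check the stated output spec: 49 frames at 7 FPS, 720x480.
FRAMES, FPS = 49, 7
WIDTH, HEIGHT, CHANNELS = 720, 480, 3

duration_s = FRAMES / FPS                       # clip length in seconds
raw_bytes = FRAMES * WIDTH * HEIGHT * CHANNELS  # uncompressed RGB uint8 size

print(duration_s)       # 7.0 seconds
print(raw_bytes / 1e6)  # ~50.8 MB before video compression
```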

  ## Citation
+
+ If you find this work helpful, please cite:
+
+ ```bibtex
  @article{chen2025sci,
    title={Sci-Fi: Symmetric Constraint for Frame Inbetweening},
    author={Chen, Liuhan and Cun, Xiaodong and Li, Xiaoyu and He, Xianyi and Yuan, Shenghai and Chen, Jie and Shan, Ying and Yuan, Li},
    year={2025}
  }
  ```
+
+ ## Links
+
+ - 📄 [Paper](https://arxiv.org/abs/2505.21205)
+ - 🐙 [GitHub](https://github.com/GVCLab/Sci-Fi)
+ - 🤗 [Model Weights](https://huggingface.co/LiuhanChen/Sci-Fi)
+
+ ## Requirements
+
+ - Model weights must be downloaded and placed in the appropriate directory
+ - See the Setup tab for configuration details