---
base_model:
- Wan-AI/Wan2.1-T2V-14B
datasets:
- BestWishYsh/OpenS2V-Eval
- BestWishYsh/OpenS2V-5M
language:
- en
license: apache-2.0
pipeline_tag: text-to-video
library_name: diffusers
---
<div align=center>
<img src="https://github.com/PKU-YuanGroup/OpenS2V-Nexus/blob/main/__assets__/OpenS2V-Nexus_logo.png?raw=true" width="300px">
</div>

<h2 align="center"> <a href="https://pku-yuangroup.github.io/OpenS2V-Nexus/">OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation</a></h2>

<h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for the latest update. </h5>
## ✨ Summary

1. **New S2V Benchmark.**
   - We introduce *OpenS2V-Eval* for comprehensive evaluation of S2V models and propose three new automatic metrics aligned with human perception.
2. **New Insights for S2V Model Selection.**
   - Our evaluations using *OpenS2V-Eval* provide crucial insights into the strengths and weaknesses of various subject-to-video generation models.
3. **Million-Scale S2V Dataset.**
   - We create *OpenS2V-5M*, a dataset comprising 5.1M high-quality regular samples and 0.35M Nexus Data samples; the latter are expected to address the three core challenges of subject-to-video generation.
## 💡 Description

- **Repository:** [Code](https://github.com/PKU-YuanGroup/OpenS2V-Nexus), [Page](https://pku-yuangroup.github.io/OpenS2V-Nexus/), [Dataset](https://huggingface.co/datasets/BestWishYsh/OpenS2V-5M), [Benchmark](https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval)
- **Paper:** [https://huggingface.co/papers/2505.20292](https://huggingface.co/papers/2505.20292)
- **Point of Contact:** [Shenghai Yuan](mailto:shyuan-cs@hotmail.com)
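## 🔧 Quick Start

The weights are published for use with `diffusers` (see the metadata above). The snippet below is a minimal sketch, not the official inference path: it assumes the checkpoint loads with the generic `DiffusionPipeline` class and that the output follows the usual text-to-video `frames` convention. The exact pipeline class, subject-image conditioning inputs, and generation settings for subject-to-video inference are documented in the [OpenS2V-Nexus repository](https://github.com/PKU-YuanGroup/OpenS2V-Nexus), which should be treated as the authoritative reference.

```python
# pip install -U diffusers transformers accelerate
# Minimal sketch (assumption: the repo loads with the generic DiffusionPipeline;
# the official OpenS2V-Nexus scripts may use a dedicated pipeline and accept
# additional subject-image inputs).
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "Gigizz/OpenS2V-Weight",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # switch to "mps" on Apple devices

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
result = pipe(prompt)

# Text-to-video pipelines return a sequence of frames rather than a single image.
export_to_video(result.frames[0], "output.mp4", fps=16)
```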
## ✏️ Citation

If you find our paper and code useful in your research, please consider giving us a star and citing our work.
```BibTeX
@article{yuan2025opens2v,
  title={OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation},
  author={Yuan, Shenghai and He, Xianyi and Deng, Yufan and Ye, Yang and Huang, Jinfa and Lin, Bin and Luo, Jiebo and Yuan, Li},
  journal={arXiv preprint arXiv:2505.20292},
  year={2025}
}
```