sunny1001 committed · commit 7ef5940 · verified · 1 parent: 2677b05

Update README.md

Files changed (1)
  1. README.md +82 -78
README.md CHANGED
---
license: apache-2.0
language:
- en
pipeline_tag: image-to-video
library_name: diffusers
---

<h1 align="center">
BindWeave: Subject-Consistent Video Generation via Cross-Modal Integration
</h1>

<div align="center">

[![arXiv](https://img.shields.io/badge/arXiv%20paper-2510.00438-b31b1b.svg)](https://arxiv.org/pdf/2510.00438)&nbsp;
[![project page](https://img.shields.io/badge/Project_page-More_visualizations-green)](https://lzy-dot.github.io/BindWeave/)&nbsp;
<a href="https://huggingface.co/ByteDance/BindWeave"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=Model&color=orange"></a>
</div>

<p align="center">
<a href="https://arxiv.org/abs/2510.00438"><strong>BindWeave: Subject-Consistent Video Generation via Cross-Modal Integration</strong></a>
</p>

<div align="center">
<p>
<a href="https://scholar.google.com/citations?user=WelDcqkAAAAJ&hl=zh-CN">Zhaoyang Li</a><sup> 1,2</sup>,
<a href="https://openreview.net/profile?id=~Dongjun_Qian1">Dongjun Qian</a><sup> 2</sup>,
<a href="https://scholar.google.com/citations?user=Kp3XAToAAAAJ&hl=zh-CN">Kai Su</a><sup> 2*</sup>,
<a href="https://scholar.google.com/citations?user=G6xrfhYAAAAJ&hl=zh-CN">Qishuai Diao</a><sup> 2</sup>,
<a href="https://openreview.net/profile?id=~Xiangyang_Xia1">Xiangyang Xia</a><sup> 2</sup>,
<a href="https://openreview.net/profile?id=~Chang_Liu71">Chang Liu</a><sup> 2</sup>,
<a href="https://scholar.google.com/citations?user=rtO5VmQAAAAJ&hl=zh-CN">Wenfei Yang</a><sup> 1</sup>,
<a href="https://scholar.google.com/citations?user=9sCGe-gAAAAJ&hl=en">Tianzhu Zhang</a><sup> 1*</sup>,
<a href="https://shallowyuan.github.io/">Zehuan Yuan</a><sup> 2</sup>
</p>
<p>
<small>
<sup>1</sup>University of Science and Technology of China <sup>2</sup>ByteDance
<br>
<sup>*</sup>Corresponding Author
</small>
</p>
</div>

<p align="center">
<img src="assets/figure1.png" width=95%>
</p>

## 📖 Overview
BindWeave is a unified subject-consistent video generation framework for single- and multi-subject prompts, built on an MLLM-DiT architecture that couples a pretrained multimodal large language model with a diffusion transformer.
It achieves cross-modal integration via entity grounding and representation alignment, leveraging the MLLM to parse complex prompts and produce subject-aware hidden states that condition the DiT for high-fidelity generation. For more details and tutorials, refer to [ByteDance/BindWeave](https://github.com/bytedance/BindWeave).
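
Since the card declares `library_name: diffusers`, a minimal loading sketch is shown below. It is a hypothetical illustration assuming the generic `DiffusionPipeline` entry point works for this repo; the actual pipeline class, input argument names, and recommended settings are not documented in this card, so consult [ByteDance/BindWeave](https://github.com/bytedance/BindWeave) for the supported inference path.

```python
# Hypothetical sketch, not the confirmed BindWeave API: the repo id is real,
# but the pipeline class, input names, dtype, and fps below are assumptions.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = DiffusionPipeline.from_pretrained(
    "ByteDance/BindWeave",       # weights from this model card
    torch_dtype=torch.bfloat16,  # assumed dtype; check the repo's recommendation
).to("cuda")

# Reference image of the subject whose identity should stay consistent.
subject = load_image("subject.png")

result = pipe(
    prompt="the subject rides a bicycle through a rainy street at dusk",
    image=subject,               # assumed input name for the subject reference
)
export_to_video(result.frames[0], "bindweave_sample.mp4", fps=16)
```

If this generic entry point fails, the model most likely requires the project's own inference code from the GitHub repository rather than a stock Diffusers pipeline.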

### OpenS2V-Eval Performance 🏆
BindWeave achieves a total score of 57.61% on the [OpenS2V-Eval](https://huggingface.co/spaces/BestWishYsh/OpenS2V-Eval) benchmark, performing robustly across its evaluation dimensions and competitively with several leading open-source and commercial systems.

| Model | TotalScore↑ | AestheticScore↑ | MotionSmoothness↑ | MotionAmplitude↑ | FaceSim↑ | GmeScore↑ | NexusScore↑ | NaturalScore↑ |
|------|----|----|----|----|----|----|----|----|
| [BindWeave](https://lzy-dot.github.io/BindWeave/) | 57.61% | 45.55% | 95.90% | 13.91% | 53.71% | 67.79% | 46.84% | 66.85% |
| [VACE-14B](https://github.com/ali-vilab/VACE) | 57.55% | 47.21% | 94.97% | 15.02% | 55.09% | 67.27% | 44.08% | 67.04% |
| [Phantom-14B](https://github.com/Phantom-video/Phantom) | 56.77% | 46.39% | 96.31% | 33.42% | 51.46% | 70.65% | 37.43% | 69.35% |
| [Kling1.6(20250503)](https://app.klingai.com/cn/) | 56.23% | 44.59% | 86.93% | 41.60% | 40.10% | 66.20% | 45.89% | 74.59% |
| [Phantom-1.3B](https://github.com/Phantom-video/Phantom) | 54.89% | 46.67% | 93.30% | 14.29% | 48.56% | 69.43% | 42.48% | 62.50% |
| [MAGREF-480P](https://github.com/MAGREF-Video/MAGREF) | 52.51% | 45.02% | 93.17% | 21.81% | 30.83% | 70.47% | 43.04% | 66.90% |
| [SkyReels-A2-P14B](https://github.com/SkyworkAI/SkyReels-A2) | 52.25% | 39.41% | 87.93% | 25.60% | 45.95% | 64.54% | 43.75% | 60.32% |
| [Vidu2.0(20250503)](https://www.vidu.cn/) | 51.95% | 41.48% | 90.45% | 13.52% | 35.11% | 67.57% | 43.37% | 65.88% |
| [Pika2.1(20250503)](https://pika.art/) | 51.88% | 46.88% | 87.06% | 24.71% | 30.38% | 69.19% | 45.40% | 63.32% |
| [VACE-1.3B](https://github.com/ali-vilab/VACE) | 49.89% | 48.24% | 97.20% | 18.83% | 20.57% | 71.26% | 37.91% | 65.46% |
| [VACE-P1.3B](https://github.com/ali-vilab/VACE) | 48.98% | 47.34% | 96.80% | 12.03% | 16.59% | 71.38% | 40.19% | 64.31% |

### BibTeX
```bibtex
@article{li2025bindweave,
  title={BindWeave: Subject-Consistent Video Generation via Cross-Modal Integration},
  author={Li, Zhaoyang and Qian, Dongjun and Su, Kai and Diao, Qishuai and Xia, Xiangyang and Liu, Chang and Yang, Wenfei and Zhang, Tianzhu and Yuan, Zehuan},
  journal={arXiv preprint arXiv:2510.00438},
  year={2025}
}
```