---
license: mit
task_categories:
- image-to-video
tags:
- video-generation
- motion-control
- point-trajectory
---

# MoveBench of Wan-Move

<p align="center">
    <img src="assets/wan-move-logo.png" alt="Wan-Move" style="width: 100%; min-width: 300px; display: block; margin: auto;">
</p>

# Wan-Move: Motion-controllable Video Generation via Latent Trajectory Guidance

[](https://arxiv.org/abs/xx)
[](https://github.com/ali-vilab/Wan-Move)
[](https://huggingface.co/Ruihang/Wan-Move-14B-480P)
[](https://www.modelscope.cn/models/Ruihang/Wan-Move-14B-480P)
[](https://huggingface.co/Ruihang/MoveBench)
[](https://www.youtube.com/watch?v=_5Cy7Z2NQJQ)
[](https://ruihang-chu.github.io/Wan-Move.html)

## MoveBench: A Comprehensive and Well-Curated Benchmark to Assess Motion Control in Videos

MoveBench evaluates fine-grained, point-level motion control in generated videos. We categorize videos from the [Pexels](https://www.pexels.com/videos/) library into 54 content categories with 10-25 videos each, yielding 1,018 cases that ensure broad scenario coverage. All clips are 5 seconds long to support the evaluation of long-range dynamics. Every clip is paired with detailed motion annotations for a single object, and an additional 192 clips carry motion annotations for multiple objects. We ensure annotation quality with an interactive labeling pipeline that combines annotation precision with automated scalability.

We welcome everyone to use it!

## Statistics

<p align="center" style="border-radius: 10px">
    <img src="assets/construction.png" width="100%" alt="construction pipeline"/>
    <strong>The construction pipeline of MoveBench</strong>
</p>

<p align="center" style="border-radius: 10px">
    <img src="assets/statistics_1.png" width="100%" alt="sample statistics"/>
    <strong>Balanced sample number per video category</strong>
</p>

<p align="center" style="border-radius: 10px">
    <img src="assets/statistics_2.png" width="100%" alt="benchmark comparison"/>
    <strong>Comparison with related benchmarks</strong>
</p>

## Download

Download MoveBench from Hugging Face:
```sh
huggingface-cli download Ruihang/MoveBench --repo-type dataset --local-dir ./MoveBench
```

Then extract the archives:
```sh
tar -xzvf en.tar.gz
tar -xzvf zh.tar.gz
```

The resulting file structure is:

```
MoveBench
├── en                                # English version
│   ├── single_track.txt
│   ├── multi_track.txt
│   ├── first_frame
│   │   ├── Pexels_videoid_0.jpg
│   │   ├── Pexels_videoid_1.jpg
│   │   ├── ...
│   ├── video
│   │   ├── Pexels_videoid_0.mp4
│   │   ├── Pexels_videoid_1.mp4
│   │   ├── ...
│   ├── track
│   │   ├── single
│   │   │   ├── Pexels_videoid_0_tracks.npy
│   │   │   ├── Pexels_videoid_0_visibility.npy
│   │   │   ├── ...
│   │   ├── multi
│   │   │   ├── Pexels_videoid_0_tracks.npy
│   │   │   ├── Pexels_videoid_0_visibility.npy
│   │   │   ├── ...
├── zh                                # Chinese version
│   ├── single_track.txt
│   ├── multi_track.txt
│   ├── first_frame
│   │   ├── Pexels_videoid_0.jpg
│   │   ├── Pexels_videoid_1.jpg
│   │   ├── ...
│   ├── video
│   │   ├── Pexels_videoid_0.mp4
│   │   ├── Pexels_videoid_1.mp4
│   │   ├── ...
│   ├── track
│   │   ├── single
│   │   │   ├── Pexels_videoid_0_tracks.npy
│   │   │   ├── Pexels_videoid_0_visibility.npy
│   │   │   ├── ...
│   │   ├── multi
│   │   │   ├── Pexels_videoid_0_tracks.npy
│   │   │   ├── Pexels_videoid_0_visibility.npy
│   │   │   ├── ...
├── bench.py                          # Evaluation script
├── utils                             # Evaluation code modules
```
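
Given this layout, the per-case track files can be paired up with a small helper. This is a minimal sketch, not part of the benchmark's official tooling: the `load_case` function name is our own, and the array shapes noted in the docstring (tracks as per-frame point coordinates, visibility as a per-frame mask) are assumptions rather than documented facts — check the shapes of the actual `.npy` files after downloading.

```python
import numpy as np
from pathlib import Path

def load_case(bench_root, case_id, lang="en", mode="single"):
    """Load the point tracks and visibility mask for one MoveBench case.

    ASSUMED shapes (verify against the real data): tracks as
    (num_frames, num_points, 2) pixel coordinates, visibility as
    (num_frames, num_points) booleans.
    """
    track_dir = Path(bench_root) / lang / "track" / mode
    tracks = np.load(track_dir / f"{case_id}_tracks.npy")
    visibility = np.load(track_dir / f"{case_id}_visibility.npy")
    return tracks, visibility
```

For example, `load_case("./MoveBench", "Pexels_videoid_0")` would return the single-object track and visibility arrays for the English-version case `Pexels_videoid_0`.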

For evaluation, please refer to the [Wan-Move](https://github.com/ali-vilab/Wan-Move) codebase. Enjoy!

<!--
## Citation
If you find our work helpful, please cite us.

```
@article{wan2025,
    title={Wan: Open and Advanced Large-Scale Video Generative Models},
    author={Team Wan and Ang Wang and Baole Ai and Bin Wen and Chaojie Mao and Chen-Wei Xie and Di Chen and Feiwu Yu and Haiming Zhao and Jianxiao Yang and Jianyuan Zeng and Jiayu Wang and Jingfeng Zhang and Jingren Zhou and Jinkai Wang and Jixuan Chen and Kai Zhu and Kang Zhao and Keyu Yan and Lianghua Huang and Mengyang Feng and Ningyi Zhang and Pandeng Li and Pingyu Wu and Ruihang Chu and Ruili Feng and Shiwei Zhang and Siyang Sun and Tao Fang and Tianxing Wang and Tianyi Gui and Tingyu Weng and Tong Shen and Wei Lin and Wei Wang and Wei Wang and Wenmeng Zhou and Wente Wang and Wenting Shen and Wenyuan Yu and Xianzhong Shi and Xiaoming Huang and Xin Xu and Yan Kou and Yangyu Lv and Yifei Li and Yijing Liu and Yiming Wang and Yingya Zhang and Yitong Huang and Yong Li and You Wu and Yu Liu and Yulin Pan and Yun Zheng and Yuntao Hong and Yupeng Shi and Yutong Feng and Zeyinzi Jiang and Zhen Han and Zhi-Fan Wu and Ziyu Liu},
    journal = {arXiv preprint arXiv:2503.20314},
    year={2025}
}
``` -->

## Contact Us
If you would like to reach our research team, feel free to drop us an [email](mailto:ruihangchu@gmail.com).