Improve model card: Add pipeline tag, paper, project page, and code links

#1
by nielsr HF Staff - opened
Files changed (1): README.md (+33 -3)
README.md CHANGED
@@ -1,3 +1,33 @@
- ---
- license: cc-by-nc-sa-4.0
- ---
+ ---
+ license: cc-by-nc-sa-4.0
+ pipeline_tag: image-to-video
+ ---
+
+ # The World is Your Canvas: Painting Promptable Events with Reference Images, Trajectories, and Text
+
+ WorldCanvas is an image-to-video (I2V) framework for promptable world events that enables rich, user-directed simulation by combining text, trajectories, and reference images. It generates coherent, controllable events, including multi-agent interactions, object entry and exit, reference-guided appearance, and counterintuitive events.
+
+ - **📄 Paper**: [The World is Your Canvas: Painting Promptable Events with Reference Images, Trajectories, and Text](https://huggingface.co/papers/2512.16924)
+ - **🌐 Project Page**: https://worldcanvas.github.io/
+ - **💻 Code on GitHub**: https://github.com/pPetrichor/WorldCanvas
+
+ <div align="center">
+   <img src="https://github.com/user-attachments/assets/cc8f7fd6-fd89-47e9-b2bf-38298131d1f7" alt="WorldCanvas Demo" width="100%">
+ </div>
+
+ ## Setup and Inference
+
+ For detailed setup, checkpoint download, and full inference instructions (with or without reference images), see the guide in the [official GitHub repository](https://github.com/pPetrichor/WorldCanvas#setup). The repository provides command-line steps and Gradio interfaces for generating conditions and videos.
+
+ ## Citation
+
+ If you find this work useful, please consider citing our paper:
+
+ ```bibtex
+ @article{wang2025worldcanvas,
+   title={The World is Your Canvas: Painting Promptable Events with Reference Images, Trajectories, and Text},
+   author={Hanlin Wang and Hao Ouyang and Qiuyu Wang and Yue Yu and Yihao Meng and Wen Wang and Ka Leong Cheng and Shuailei Ma and Qingyan Bai and Yixuan Li and Cheng Chen and Yanhong Zeng and Xing Zhu and Yujun Shen and Qifeng Chen},
+   journal={arXiv preprint arXiv:2512.16924},
+   year={2025}
+ }
+ ```