Add model card and metadata

#1 by nielsr (HF Staff) - opened

Files changed (1):
  1. README.md +28 -1
README.md CHANGED
@@ -1,4 +1,31 @@
  ---
  license: mit
  ---
- arxiv.org/abs/2508.07409
  ---
  license: mit
+ pipeline_tag: image-to-video
+ library_name: diffusers
  ---
+
+ # CharacterShot: Controllable and Consistent 4D Character Animation
+
+ [CharacterShot](https://arxiv.org/abs/2508.07409) is a controllable and consistent 4D character animation framework that enables any individual designer to create dynamic 3D characters from a single reference character image and a 2D pose sequence.
+
+ - **Paper:** [CharacterShot: Controllable and Consistent 4D Character Animation](https://arxiv.org/abs/2508.07409)
+ - **Code:** [GitHub Repository](https://github.com/Jeoyal/CharacterShot)
+ - **Authors:** [Junyao Gao](https://huggingface.co/Gaojunyao), [Jiaxing Li](https://huggingface.co/LiJiaxing), Wenran Liu, [Yanhong Zeng](https://huggingface.co/zengyh1900), Fei Shen, Kai Chen, Yanan Sun, Cairong Zhao
+
+ ## Introduction
+
+ CharacterShot first pretrains a powerful 2D character animation model on top of a DiT-based image-to-video backbone (CogVideoX). It then lifts the animation model from 2D to 3D by introducing a dual-attention module together with a camera prior, generating multi-view videos with both spatial-temporal and spatial-view consistency. Finally, it applies a novel neighbor-constrained 4D Gaussian splatting optimization to these multi-view videos, yielding continuous and stable 4D character representations.
+
+ ## Citation
+
+ ```bibtex
+ @article{gao2025charactershot,
+   title={CharacterShot: Controllable and Consistent 4D Character Animation},
+   author={Gao, Junyao and Li, Jiaxing and Liu, Wenran and Zeng, Yanhong and Shen, Fei and Chen, Kai and Sun, Yanan and Zhao, Cairong},
+   journal={arXiv preprint arXiv:2508.07409},
+   year={2025},
+ }
+ ```
+
+ ## Acknowledgements
+
+ The code is built upon [CogVideo](https://github.com/THUDM/CogVideo).