import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video
# switch "cuda" to "mps" on Apple silicon devices
pipe = DiffusionPipeline.from_pretrained(
    "Gaojunyao/CharacterShot", dtype=torch.bfloat16
)
pipe.to("cuda")
prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)
output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")

CharacterShot: Controllable and Consistent 4D Character Animation
CharacterShot is a controllable and consistent 4D character animation framework: it creates dynamic 3D characters (i.e., 4D character animation) from a single reference character image and a 2D pose sequence.
- Paper: CharacterShot: Controllable and Consistent 4D Character Animation
- Repository: https://github.com/Jeoyal/CharacterShot
Introduction
CharacterShot utilizes a powerful 2D character animation model based on a DiT image-to-video architecture. It lifts these animations to 3D using dual-attention modules and camera priors to ensure spatial-temporal and spatial-view consistency. The final representation is optimized using neighbor-constrained 4D Gaussian Splatting, resulting in stable and continuous character representations.
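The dual-attention idea described above can be sketched as two interleaved attention passes, one over the frame (temporal) axis and one over the camera-view axis. The module below is a hypothetical illustration of that pattern in plain PyTorch; the names, shapes, and structure are assumptions for exposition, not the actual CharacterShot implementation.

```python
import torch
import torch.nn as nn

class DualAttentionBlock(nn.Module):
    """Illustrative dual-attention block: temporal attention per view,
    then view attention per frame (hypothetical, for exposition only)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.view_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, views, frames, tokens, dim)
        b, v, f, t, d = x.shape

        # Spatial-temporal attention: attend across frames within each view.
        h = x.permute(0, 1, 3, 2, 4).reshape(b * v * t, f, d)
        n = self.norm1(h)
        h = self.temporal_attn(n, n, n)[0]
        x = x + h.reshape(b, v, t, f, d).permute(0, 1, 3, 2, 4)

        # Spatial-view attention: attend across views within each frame.
        h = x.permute(0, 2, 3, 1, 4).reshape(b * f * t, v, d)
        n = self.norm2(h)
        h = self.view_attn(n, n, n)[0]
        x = x + h.reshape(b, f, t, v, d).permute(0, 3, 1, 2, 4)
        return x
```

Splitting attention this way keeps each pass cheap (sequence length is only the number of frames or views) while still propagating information along both axes, which is how such blocks typically enforce temporal and cross-view consistency.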
The model was trained on Character4D, a large-scale dataset containing 13,115 unique characters with diverse appearances and motions.
Citation
@article{gao2025charactershot,
  title={CharacterShot: Controllable and Consistent 4D Character Animation},
  author={Gao, Junyao and Li, Jiaxing and Liu, Wenran and Zeng, Yanhong and Shen, Fei and Chen, Kai and Sun, Yanan and Zhao, Cairong},
  journal={arXiv preprint arXiv:2508.07409},
  year={2025}
}
Acknowledgements
The code is built upon CogVideo.