---
license: cc-by-nc-sa-4.0
pipeline_tag: image-to-3d
---

# Motion 3-to-4: 3D Motion Reconstruction for 4D Synthesis

Motion 3-to-4 reconstructs 3D motion from video input for 4D synthesis, generating animated 3D models with realistic, temporally coherent motion in a single feed-forward pass.

[Paper](https://arxiv.org/abs/2601.14253) | Project Page | [Code](https://github.com/Inception3D/Motion324)

## Abstract

Motion 3-to-4 is a feed-forward framework for synthesising high-quality 4D dynamic objects from a single monocular video and an optional 3D reference mesh. It addresses challenges in 4D synthesis by decomposing the task into static 3D shape generation and motion reconstruction. Using a canonical reference mesh, the model learns a compact motion latent representation and predicts per-frame vertex trajectories to recover complete, temporally coherent geometry. A scalable frame-wise transformer further enables robustness to varying sequence lengths.
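
To make the decomposition concrete, below is a minimal, illustrative PyTorch sketch of the motion-reconstruction half: a frame-wise transformer maps per-frame video features to a compact motion latent, which conditions the prediction of per-frame vertex offsets on the canonical reference mesh. All module names, dimensions, and the conditioning scheme are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FrameWiseMotionDecoder(nn.Module):
    """Illustrative sketch (not the official architecture): predicts
    per-frame vertex trajectories for a canonical mesh from video features."""

    def __init__(self, feat_dim=512, latent_dim=128, num_heads=8, num_layers=4):
        super().__init__()
        # Compress each frame's feature into a compact motion latent.
        self.to_latent = nn.Linear(feat_dim, latent_dim)
        # Frame-wise transformer: attends over the temporal axis, so it
        # can handle varying sequence lengths.
        layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=num_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Decode a 3D offset per vertex, conditioned on the frame's latent.
        self.offset_head = nn.Sequential(
            nn.Linear(latent_dim + 3, 256), nn.ReLU(), nn.Linear(256, 3))

    def forward(self, frame_feats, canon_verts):
        # frame_feats: (B, T, feat_dim) per-frame video features
        # canon_verts: (B, V, 3) vertices of the canonical reference mesh
        z = self.temporal(self.to_latent(frame_feats))          # (B, T, latent)
        B, T, _ = z.shape
        V = canon_verts.shape[1]
        z_exp = z[:, :, None, :].expand(B, T, V, -1)            # (B, T, V, latent)
        v_exp = canon_verts[:, None, :, :].expand(B, T, V, 3)   # (B, T, V, 3)
        offsets = self.offset_head(torch.cat([z_exp, v_exp], dim=-1))
        # Per-frame vertex trajectories: canonical vertices plus offsets.
        return v_exp + offsets                                  # (B, T, V, 3)

# Example: 16 frames, 2048-vertex mesh
model = FrameWiseMotionDecoder()
traj = model(torch.randn(1, 16, 512), torch.randn(1, 2048, 3))
print(traj.shape)  # torch.Size([1, 16, 2048, 3])
```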

## Quick Start

### Installation

```bash
git clone https://github.com/Inception3D/Motion324.git
cd Motion324

conda create -n Motion324 python=3.11
conda activate Motion324
pip install -r requirements.txt

# (Optional) Install Hunyuan3D-2.0 modules
cd scripts/hy3dgen/texgen/custom_rasterizer && python3 setup.py install && cd ../../../..
cd scripts/hy3dgen/texgen/differentiable_renderer && python3 setup.py install && cd ../../../..
```

### Inference

Download the pre-trained checkpoints and place them in `experiments/checkpoints/`.
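
If the checkpoints are hosted on the Hugging Face Hub, `huggingface_hub` can fetch them into place. A minimal sketch; the repo ID below is a placeholder, as the model card does not state it:

```python
from huggingface_hub import snapshot_download

# Placeholder repo ID -- replace with the actual checkpoint repository.
snapshot_download(
    repo_id="Inception3D/Motion324",
    local_dir="experiments/checkpoints",
)
```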

Reconstruct 4D from an existing mesh and video:

```bash
chmod +x ./scripts/4D_from_existing.sh
./scripts/4D_from_existing.sh ./examples/chili.glb ./examples/chili.mp4 ./examples/output
```
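To process several mesh/video pairs in one go, the script can be wrapped in a small Python driver. A minimal sketch, assuming only the positional interface shown above; the pair list and output layout are illustrative:

```python
import subprocess
from pathlib import Path

# Hypothetical batch driver around the script's positional arguments:
# <mesh.glb> <video.mp4> <output_dir>
pairs = [("./examples/chili.glb", "./examples/chili.mp4")]

for mesh, video in pairs:
    out = Path("./examples/output") / Path(mesh).stem
    out.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["./scripts/4D_from_existing.sh", mesh, video, str(out)],
        check=True,
    )
```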

Generate a 4D animation from a single video input (requires the optional Hunyuan3D-2.0 modules installed above):

```bash
chmod +x ./scripts/4D_from_video.sh
./scripts/4D_from_video.sh ./examples/tiger.mp4
```
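To sanity-check the results, the per-frame geometry can be loaded back for inspection. A sketch assuming the pipeline writes one mesh file per frame (e.g. `.glb`) into the output directory; the actual output layout may differ:

```python
import glob
import trimesh

# Assumption: one mesh file per frame in the output directory.
# Adjust the pattern to the actual layout.
frames = sorted(glob.glob("./examples/output/*.glb"))
for path in frames:
    mesh = trimesh.load(path, force="mesh")
    print(path, mesh.vertices.shape, mesh.faces.shape)
```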

## Citation

If you find this work useful, please cite:

```bibtex
@article{chen2026motion3to4,
    title={Motion 3-to-4: 3D Motion Reconstruction for 4D Synthesis},
    author={Chen, Hongyuan and Chen, Xingyu and Zhang, Youjia and Xu, Zexiang and Chen, Anpei},
    journal={arXiv preprint arXiv:2601.14253},
    year={2026}
}
```