Motion 3-to-4: 3D Motion Reconstruction for 4D Synthesis
Motion 3-to-4 reconstructs 3D motion from a monocular video for 4D synthesis, enabling the generation of animated 3D models with realistic, temporally coherent motion.
Abstract
We present Motion 3-to-4, a feed-forward framework for synthesising high-quality 4D dynamic objects from a single monocular video and an optional 3D reference mesh. While recent advances have significantly improved 2D, video, and 3D content generation, 4D synthesis remains difficult due to limited training data and the inherent ambiguity of recovering geometry and motion from a monocular viewpoint. Motion 3-to-4 addresses these challenges by decomposing 4D synthesis into static 3D shape generation and motion reconstruction. Using a canonical reference mesh, our model learns a compact motion latent representation and predicts per-frame vertex trajectories to recover complete, temporally coherent geometry. A scalable frame-wise transformer further enables robustness to varying sequence lengths. Evaluations on both standard benchmarks and a new dataset with accurate ground-truth geometry show that Motion 3-to-4 delivers superior fidelity and spatial consistency compared to prior work. Project page is available at https://motion3-to-4.github.io/.
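To make the decomposition concrete, below is a minimal PyTorch sketch of the idea: a static canonical mesh plus a sequence of per-frame motion latents, with a frame-wise transformer decoding the latents into per-vertex trajectories. Every module, dimension, and tensor name here is illustrative only and is not the released implementation.

import torch
import torch.nn as nn

class MotionDecoderSketch(nn.Module):
    """Illustrative only: canonical shape + motion latents -> vertex trajectories."""
    def __init__(self, latent_dim=256, num_vertices=4096):
        super().__init__()
        # Frame-wise transformer over per-frame motion latents, so the model
        # can handle varying sequence lengths.
        layer = nn.TransformerEncoderLayer(d_model=latent_dim, nhead=8, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=4)
        # Decode each frame's latent into per-vertex displacements.
        self.to_offsets = nn.Linear(latent_dim, num_vertices * 3)

    def forward(self, canonical_vertices, motion_latents):
        # canonical_vertices: (V, 3) reference mesh in canonical pose
        # motion_latents:     (T, latent_dim) compact motion code per frame
        z = self.temporal(motion_latents.unsqueeze(0)).squeeze(0)   # (T, D)
        offsets = self.to_offsets(z).view(z.shape[0], -1, 3)        # (T, V, 3)
        # Per-frame vertex trajectories: canonical geometry plus motion offsets.
        return canonical_vertices.unsqueeze(0) + offsets

verts = torch.randn(4096, 3)     # dummy canonical mesh vertices
latents = torch.randn(24, 256)   # dummy 24-frame motion latent sequence
trajectories = MotionDecoderSketch()(verts, latents)  # (24, 4096, 3)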
Quick Start
git clone https://github.com/Inception3D/Motion324.git
cd Motion324
conda create -n Motion324 python=3.11
conda activate Motion324
pip install -r requirements.txt
# (Optional) Install Hunyuan3D-2.0 modules
cd scripts/hy3dgen/texgen/custom_rasterizer && python3 setup.py install && cd ../../../..
cd scripts/hy3dgen/texgen/differentiable_renderer && python3 setup.py install && cd ../../../..
chmod +x ./scripts/4D_from_existing.sh
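# Usage: ./scripts/4D_from_existing.sh <reference_mesh> <driving_video> <output_dir>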
./scripts/4D_from_existing.sh ./examples/chili.glb ./examples/chili.mp4 ./examples/output
# Video-only pipeline: requires the optional Hunyuan3D-2.0 modules installed above to generate the reference mesh
chmod +x ./scripts/4D_from_video.sh
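# Usage: ./scripts/4D_from_video.sh <input_video>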
./scripts/4D_from_video.sh ./examples/tiger.mp4
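Once a run finishes, the resulting frames can be inspected with a few lines of Python. This is a minimal sketch that assumes the pipeline writes one mesh file per frame into the output directory (e.g. ./examples/output from the command above); the file format and naming below are assumptions, not documented behaviour.

# Output inspection sketch. ASSUMPTION: one mesh file per frame (here
# assumed .glb) lands in the output directory; check the actual output
# layout of your run before relying on this.
import glob
import trimesh

frame_paths = sorted(glob.glob("./examples/output/*.glb"))
for path in frame_paths:
    mesh = trimesh.load(path, force="mesh")
    print(path, "vertices:", mesh.vertices.shape, "faces:", mesh.faces.shape)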
Citation
If you find this work useful, please cite:
@article{chen2026motion3to4,
  title={Motion 3-to-4: 3D Motion Reconstruction for 4D Synthesis},
  author={Chen, Hongyuan and Chen, Xingyu and Zhang, Youjia and Xu, Zexiang and Chen, Anpei},
  journal={arXiv preprint arXiv:2601.14253},
  year={2026}
}