nielsr (HF Staff) committed on
Commit 54437d9 · verified · 1 Parent(s): aa69eb1

Add metadata and improve model card

This PR improves the model card by:
- Adding the `image-to-3d` pipeline tag and the `cc-by-nc-sa-4.0` license to the metadata.
- Adding a link to the official GitHub repository.
- Shortening the abstract for better readability.
- Including a "Quick Start" section with installation and inference instructions found in the repository.

Files changed (1)
  1. README.md +22 -4
README.md CHANGED
@@ -1,15 +1,22 @@
+---
+license: cc-by-nc-sa-4.0
+pipeline_tag: image-to-3d
+---
+
 # Motion 3-to-4: 3D Motion Reconstruction for 4D Synthesis
 
-**Motion 3-to-4** reconstructs 3D motion from video inputs for 4D synthesis, enabling the generation of animated 3D models with realistic motion.
+**Motion 3-to-4** reconstructs 3D motion from video inputs for 4D synthesis, enabling the generation of animated 3D models with realistic motion in a feed-forward manner.
 
-[Paper](https://arxiv.org/abs/2601.14253) | [Project Page](https://motion3-to-4.github.io/)
+[Paper](https://arxiv.org/abs/2601.14253) | [Project Page](https://motion3-to-4.github.io/) | [Code](https://github.com/Inception3D/Motion324)
 
 ## Abstract
 
-We present Motion 3-to-4, a feed-forward framework for synthesising high-quality 4D dynamic objects from a single monocular video and an optional 3D reference mesh. While recent advances have significantly improved 2D, video, and 3D content generation, 4D synthesis remains difficult due to limited training data and the inherent ambiguity of recovering geometry and motion from a monocular viewpoint. Motion 3-to-4 addresses these challenges by decomposing 4D synthesis into static 3D shape generation and motion reconstruction. Using a canonical reference mesh, our model learns a compact motion latent representation and predicts per-frame vertex trajectories to recover complete, temporally coherent geometry. A scalable frame-wise transformer further enables robustness to varying sequence lengths. Evaluations on both standard benchmarks and a new dataset with accurate ground-truth geometry show that Motion 3-to-4 delivers superior fidelity and spatial consistency compared to prior work. Project page is available at https://motion3-to-4.github.io/.
+Motion 3-to-4 is a feed-forward framework for synthesising high-quality 4D dynamic objects from a single monocular video and an optional 3D reference mesh. It addresses challenges in 4D synthesis by decomposing the task into static 3D shape generation and motion reconstruction. Using a canonical reference mesh, the model learns a compact motion latent representation and predicts per-frame vertex trajectories to recover complete, temporally coherent geometry. A scalable frame-wise transformer further enables robustness to varying sequence lengths.
 
 ## Quick Start
 
+### Installation
+
 ```bash
 git clone https://github.com/Inception3D/Motion324.git
 cd Motion324
@@ -21,11 +28,22 @@ pip install -r requirements.txt
 # (Optional) Install Hunyuan3D-2.0 modules
 cd scripts/hy3dgen/texgen/custom_rasterizer && python3 setup.py install && cd ../../../..
 cd scripts/hy3dgen/texgen/differentiable_renderer && python3 setup.py install && cd ../../../..
+```
+
+### Inference
+
+Download the pre-trained checkpoints and place them in `experiments/checkpoints/`.
 
+**Reconstruct 4D from an existing mesh and video:**
+
+```bash
 chmod +x ./scripts/4D_from_existing.sh
 ./scripts/4D_from_existing.sh ./examples/chili.glb ./examples/chili.mp4 ./examples/output
+```
 
-# Hunyuan needed
+**Generate 4D animation from a single video input (requires Hunyuan):**
+
+```bash
 chmod +x ./scripts/4D_from_video.sh
 ./scripts/4D_from_video.sh ./examples/tiger.mp4
 ```
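
The `license` and `pipeline_tag` fields this PR adds are YAML front matter at the top of README.md, which the Hub reads as model-card metadata. A minimal, stdlib-only sketch of how that block can be parsed (the relevant card lines are inlined here as an assumption, rather than read from the file):

```python
# Minimal sketch: extract simple key/value YAML front matter from a model card.
# The card text below mirrors the lines added in this PR; a real check would
# read README.md from disk instead of using an inlined string.
card = """---
license: cc-by-nc-sa-4.0
pipeline_tag: image-to-3d
---

# Motion 3-to-4: 3D Motion Reconstruction for 4D Synthesis
"""

def parse_front_matter(text: str) -> dict:
    """Return flat key/value pairs between the leading '---' fences, or {}."""
    lines = text.splitlines()
    if not lines or lines[0] != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line == "---":  # closing fence ends the metadata block
            break
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

print(parse_front_matter(card))
# {'license': 'cc-by-nc-sa-4.0', 'pipeline_tag': 'image-to-3d'}
```

This handles only flat `key: value` pairs, which is all this card uses; nested metadata would need a full YAML parser.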