<img src="assets/teaser2.webp" width="100%" alt="Teaser Image">
<br>
<a href="https://arxiv.org/pdf/2503.16421"><img src="https://img.shields.io/static/v1?label=Paper&message=2503.16421&color=red&logo=arxiv"></a>
<a href="https://quanhaol.github.io/magicmotion-site/"><img src="https://img.shields.io/static/v1?label=Project&message=Page&color=green&logo=github-pages"></a>
<a href="https://huggingface.co/quanhaol/MagicMotion"><img src="https://img.shields.io/badge/🤗_HuggingFace-Model-ffbd45.svg" alt="HuggingFace"></a>
<a href="https://huggingface.co/datasets/quanhaol/MagicData"><img src="https://img.shields.io/badge/🤗_HuggingFace-Dataset-ffbd45.svg" alt="HuggingFace"></a>
> **MagicMotion: Controllable Video Generation with Dense-to-Sparse Trajectory Guidance**
> <br>
> [Quanhao Li\*](https://github.com/quanhaol), [Zhen Xing\*](https://chenhsing.github.io/), [Rui Wang](https://scholar.google.com/citations?user=116smmsAAAAJ&hl=en), [Hui Zhang](https://huizhang0812.github.io/), [Qi Dai](https://daiqi1989.github.io/), and [Zuxuan Wu](https://zxwu.azurewebsites.net/)
> <br>
\* equal contribution
## 💡 Abstract
Recent advances in video generation have led to remarkable improvements in visual quality and temporal coherence. Building on these advances, trajectory-controllable video generation has emerged to enable precise object motion control through explicitly defined spatial paths.
However, existing methods struggle with complex object movements and multi-object motion control, resulting in imprecise trajectory adherence, poor object consistency, and compromised visual quality.
Furthermore, these methods only support trajectory control in a single format, limiting their applicability in diverse scenarios.
Additionally, there is no publicly available dataset or benchmark specifically tailored for trajectory-controllable video generation, hindering robust training and systematic evaluation.
To address these challenges, we introduce **MagicMotion**, a novel image-to-video generation framework that enables trajectory control through three levels of conditions from dense to sparse: masks, bounding boxes, and sparse boxes. Given an input image and trajectories, MagicMotion seamlessly animates objects along defined trajectories while maintaining object consistency and visual quality.
Furthermore, we present **MagicData**, a large-scale trajectory-controlled video dataset, along with an automated pipeline for annotation and filtering.
We also introduce **MagicBench**, a comprehensive benchmark that assesses both video quality and trajectory control accuracy across different numbers of objects.
Extensive experiments demonstrate that MagicMotion outperforms previous methods across various metrics.
<img src="assets/teaser.webp" width="100%" alt="Teaser Image">
## 📣 Updates
- `2025/07/28` 🔥🔥 MagicData has been released [`here`](https://huggingface.co/datasets/quanhaol/MagicData). You are welcome to use our dataset!
- `2025/06/26` 🔥🔥 MagicMotion has been accepted by ICCV 2025! 🎉🎉🎉
- `2025/03/28` 🔥🔥 We released an interactive Gradio demo for MagicMotion.
- `2025/03/27` MagicMotion can now run inference on a single RTX 4090 GPU (with less than 24GB of GPU memory).
- `2025/03/21` 🔥🔥 We released MagicMotion, including inference code and model weights.
## 📋 Table of Contents
- [💡 Abstract](#-abstract)
- [📣 Updates](#-updates)
- [📋 Table of Contents](#-table-of-contents)
- [✅ TODO List](#-todo-list)
- [🚀 Installation](#-installation)
- [📦 Model Weights](#-model-weights)
  - [Folder Structure](#folder-structure)
  - [Download Links](#download-links)
- [🚀 Inference](#-inference)
  - [Scripts](#scripts)
- [🖥️ Gradio Demo](#️-gradio-demo)
- [🤝 Acknowledgements](#-acknowledgements)
- [📧 Contact](#-contact)
## ✅ TODO List
- [x] Release our inference code and model weights
- [x] Release gradio demo
- [x] Release MagicData
- [ ] Release MagicBench and evaluation code
- [ ] Release our training code
## 🚀 Installation
```bash
# Clone this repository.
git clone https://github.com/quanhaol/MagicMotion
cd MagicMotion
# Install requirements
conda env create -n magicmotion --file environment.yml
conda activate magicmotion
pip install git+https://github.com/huggingface/diffusers
# Install Grounded_SAM2
cd trajectory_construction/Grounded_SAM2
pip install -e .
pip install --no-build-isolation -e grounding_dino
# Optional: For image editing
pip install git+https://github.com/huggingface/image_gen_aux
```
## 📦 Model Weights
### Folder Structure
```
MagicMotion
└── ckpts
    ├── stage1
    │   └── mask.pt
    ├── stage2
    │   ├── box.pt
    │   └── box_perception_head.pt
    └── stage3
        ├── sparse_box.pt
        └── sparse_box_perception_head.pt
```
### Download Links
```bash
pip install "huggingface_hub[hf_transfer]"
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download quanhaol/MagicMotion --local-dir ckpts
```
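After downloading, you can sanity-check that every expected checkpoint file is in place. The helper below is a minimal sketch: the file list simply mirrors the folder structure shown above, and the function name is our own, not part of the repo.

```python
from pathlib import Path

# Checkpoint files expected under ckpts/, mirroring the folder structure above.
EXPECTED_CHECKPOINTS = [
    "stage1/mask.pt",
    "stage2/box.pt",
    "stage2/box_perception_head.pt",
    "stage3/sparse_box.pt",
    "stage3/sparse_box_perception_head.pt",
]

def missing_checkpoints(ckpts_dir: str) -> list[str]:
    """Return the expected checkpoint files that are absent from ckpts_dir."""
    root = Path(ckpts_dir)
    return [rel for rel in EXPECTED_CHECKPOINTS if not (root / rel).is_file()]

missing = missing_checkpoints("ckpts")
if missing:
    print("Missing checkpoints:", ", ".join(missing))
else:
    print("All checkpoints found.")
```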
## 🚀 Inference
Inference requires **only 23GB of GPU memory** (tested on a single 24GB NVIDIA GeForce RTX 4090 GPU).
If you have sufficient GPU memory, you can modify `magicmotion/inference.py` to improve runtime performance:
```python
# Optimized setting (for GPUs with sufficient memory)
pipe.to("cuda")
# pipe.enable_sequential_cpu_offload()
```
> **Note**: Using the optimized setting can reduce runtime by up to 2x.
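If you would rather automate this choice, one option is to pick the setting from the free GPU memory at startup. The sketch below assumes the ~23 GB requirement stated above; `REQUIRED_GIB`, the helper name, and the wiring shown in comments are illustrative, not part of `magicmotion/inference.py`.

```python
# Decide between full-GPU execution and sequential CPU offload.
# The 23 GiB figure comes from the inference note above.
REQUIRED_GIB = 23.0

def use_cpu_offload(free_gib: float, required_gib: float = REQUIRED_GIB) -> bool:
    """Return True when free GPU memory is too small to hold the full pipeline."""
    return free_gib < required_gib

# Hypothetical wiring inside the inference script:
#   free = torch.cuda.mem_get_info()[0] / 1024**3
#   if use_cpu_offload(free):
#       pipe.enable_sequential_cpu_offload()  # slower, but fits in <24 GB
#   else:
#       pipe.to("cuda")                       # up to ~2x faster
```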
### Scripts
```bash
# Demo inference scripts for each stage (input image & trajectory already provided)
bash magicmotion/scripts/inference/inference_mask.sh
bash magicmotion/scripts/inference/inference_box.sh
bash magicmotion/scripts/inference/inference_sparse_box.sh
# You can also construct trajectories for each stage yourself -- see MagicMotion/trajectory_construction for more details
python trajectory_construction/plan_mask.py
python trajectory_construction/plan_box.py
python trajectory_construction/plan_sparse_box.py
# Optional: Use FLUX to generate the input image via text-to-image generation or image editing -- see MagicMotion/first_frame_generation for more details
python first_frame_generation/t2i_flux.py
python first_frame_generation/edit_image_flux.py
```
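For intuition, a sparse-box trajectory can be thought of as a handful of keyframe bounding boxes whose positions are filled in over time. The snippet below only illustrates that idea with linear interpolation; the actual trajectory formats are defined by the scripts in `trajectory_construction`, and this function is hypothetical.

```python
def interpolate_boxes(start, end, num_frames):
    """Linearly interpolate an (x1, y1, x2, y2) box from start to end."""
    boxes = []
    for i in range(num_frames):
        t = i / (num_frames - 1) if num_frames > 1 else 0.0
        boxes.append(tuple(s + t * (e - s) for s, e in zip(start, end)))
    return boxes

# A box moving from the top-left toward the bottom-right of the frame.
trajectory = interpolate_boxes((0, 0, 100, 100), (380, 220, 480, 320), 49)
```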
## 🖥️ Gradio Demo
Usage:
```bash
bash magicmotion/scripts/app/app.sh
```
<img src="assets/images/gradio/1.png" alt="Gradio Demo 1" style="width: 60%; border: 1px solid #ddd; border-radius: 4px; padding: 5px;"> <img src="assets/images/gradio/2.png" alt="Gradio Demo 2" style="width: 60%; border: 1px solid #ddd; border-radius: 4px; padding: 5px;">
## 🤝 Acknowledgements
We would like to express our gratitude to the following open-source projects that have been instrumental in the development of our project:
- [CogVideo](https://github.com/THUDM/CogVideo): An open source video generation framework by THUKEG.
- [Open-Sora](https://github.com/hpcaitech/Open-Sora): An open source video generation framework by HPC-AI Tech.
- [finetrainers](https://github.com/a-r-r-o-w/finetrainers): A memory-optimized training library for diffusion models.
Special thanks to the contributors of these libraries for their hard work and dedication!
## 📧 Contact
If you have any suggestions or find our work helpful, feel free to contact us.
Email: liqh24@m.fudan.edu.cn or zhenxingfd@gmail.com
If you find our work useful, <b>please consider giving a star to this GitHub repository and citing it</b>:
```bibtex
@article{li2025magicmotion,
title={MagicMotion: Controllable Video Generation with Dense-to-Sparse Trajectory Guidance},
author={Li, Quanhao and Xing, Zhen and Wang, Rui and Zhang, Hui and Dai, Qi and Wu, Zuxuan},
journal={arXiv preprint arXiv:2503.16421},
year={2025}
}
```