# MagicTryOn: Harnessing Diffusion Transformer for Garment-Preserving Video Virtual Try-on

<a href="https://arxiv.org/abs/2505.21325v2"><img src='https://img.shields.io/badge/arXiv-2505.21325-red?style=flat&logo=arXiv&logoColor=red' alt='arXiv'></a>
<a href="https://vivocameraresearch.github.io/magictryon/"><img src='https://img.shields.io/badge/Project-Page-Green' alt='Project Page'></a>
<a href="https://creativecommons.org/licenses/by-nc-sa/4.0/"><img src='https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgreen?style=flat' alt='License'></a>

**MagicTryOn** is a video virtual try-on framework built on a large-scale video diffusion Transformer. ***1) It adopts the Wan2.1 diffusion Transformer as its backbone***, ***2) employs full self-attention to model spatiotemporal consistency***, and ***3) introduces a coarse-to-fine garment preservation strategy together with a mask-aware loss that enhances fidelity in the garment region***.
<div align="center">
<img src="asset/model.png" width="100%" height="100%"/>
</div>

## Updates
- **`2025/06/06`**: 🎉 We are excited to announce that the ***code and weights*** of [**MagicTryOn**](https://github.com/vivoCameraResearch/Magic-TryOn/) have been released! Check it out! You can download the weights from 🤗[**HuggingFace**](https://huggingface.co/LuckyLiGY/MagicTryOn).
- **`2025/05/27`**: Our [**Paper on ArXiv**](https://arxiv.org/abs/2505.21325v2) is available 🥳!

## To-Do List for MagicTryOn Release
- ✅ Release the source code
- ✅ Release the inference demo and pretrained weights
- ✅ Release the customized try-on utilities
- [ ] Release the testing scripts
- [ ] Release the training scripts
- [ ] Release the second version of the pretrained model weights
- [ ] Update the Gradio app

## Installation

Create a conda environment and install the requirements:
```shell
# python==3.12.9 cuda==12.3 torch==2.2
conda create -n magictryon python==3.12.9
conda activate magictryon
pip install -r requirements.txt
# or
conda env create -f environment.yaml
```
If you encounter an error while installing Flash Attention, please [**manually download**](https://github.com/Dao-AILab/flash-attention/releases) the installation package based on your Python version, CUDA version, and Torch version, and install it using ***pip install***.
## Demo Inference
### 1. Image TryOn
You can directly run the following commands to perform image try-on. To modify inference parameters, edit the corresponding script (***predict_image_tryon_up.py*** or ***predict_image_tryon_low.py***).
```shell
CUDA_VISIBLE_DEVICES=0 python predict_image_tryon_up.py
CUDA_VISIBLE_DEVICES=1 python predict_image_tryon_low.py
```

### 2. Video TryOn
You can directly run the following commands to perform video try-on. To modify inference parameters, edit the corresponding script (***predict_video_tryon_up.py*** or ***predict_video_tryon_low.py***).
```shell
CUDA_VISIBLE_DEVICES=0 python predict_video_tryon_up.py
CUDA_VISIBLE_DEVICES=1 python predict_video_tryon_low.py
```

### 3. Customize TryOn
Before performing customized try-on, you need to complete the following five steps to prepare the required inputs:

1. **Cloth Caption**
Generate a descriptive caption for each garment, which may be used for conditioning or multimodal control. We use [**Qwen/Qwen2.5-VL-7B-Instruct**](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) to obtain the captions. Before running, specify the garment folder path inside the script (a minimal sketch of the captioning call follows the command).
```shell
python inference/customize/get_garment_caption.py
```
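
For reference, the captioning call inside ***get_garment_caption.py*** looks roughly like the sketch below, which uses the standard Qwen2.5-VL API from `transformers` and `qwen_vl_utils`. The garment path and prompt are illustrative, not the script's exact values, and the actual script may differ.
```python
# Minimal captioning sketch with Qwen2.5-VL (illustrative path and prompt).
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "datasets/garment/vivo/vivo_garment/00001.jpg"},
        {"type": "text", "text": "Describe this garment (color, material, pattern, style) in one sentence."},
    ],
}]

# Build the chat prompt, pack the image tensors, and generate the caption.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
caption = processor.batch_decode(output_ids[:, inputs.input_ids.shape[1]:],
                                 skip_special_tokens=True)[0]
print(caption)
```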
2. **Cloth Line Map**
Extract the structural lines or sketch of the garment using [**AniLines-Anime-Lineart-Extractor**](https://github.com/zhenglinpan/AniLines-Anime-Lineart-Extractor).

```shell
cd inference/customize/AniLines
python infer.py --dir_in datasets/garment/vivo/vivo_garment --dir_out datasets/garment/vivo/vivo_garment_anilines --mode detail --binarize -1 --fp16 True --device cuda:1
```
3. **Mask**
Generate the agnostic mask of the garment, which is essential for region control during try-on. Please [**download**]() the required checkpoint for obtaining the agnostic mask. The checkpoint needs to be placed in the ***inference/customize/gen_mask/ckpt*** folder.

(1) Rename your video to ***video.mp4***, then construct the folders according to the following directory structure (a minimal sketch that builds this skeleton is shown after the tree).
```
├── datasets
│   ├── person
│   │   ├── customize
│   │   │   ├── video
│   │   │   │   ├── 00001
│   │   │   │   │   └── video.mp4
│   │   │   │   ├── 00002 ...
│   │   │   ├── image
│   │   │   │   ├── 00001
│   │   │   │   │   ├── images
│   │   │   │   │   │   ├── 0000.png
│   │   │   │   ├── 00002 ...
```
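
If helpful, here is a minimal sketch (a hypothetical helper, not a repo script) that builds this skeleton and copies your clip into place:
```python
# Hypothetical helper: create the expected folder layout for one clip and copy
# the source video in under the required name video.mp4.
import os
import shutil

def prepare_case(src_video: str, case_id: str = "00001",
                 root: str = "datasets/person/customize") -> None:
    video_dir = os.path.join(root, "video", case_id)
    image_dir = os.path.join(root, "image", case_id, "images")
    os.makedirs(video_dir, exist_ok=True)
    os.makedirs(image_dir, exist_ok=True)
    # The pipeline expects the clip to be named exactly video.mp4.
    shutil.copy(src_video, os.path.join(video_dir, "video.mp4"))

prepare_case("my_clip.mp4")
```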

(2) Use ***video2image.py*** to convert the video into image frames and save them to ***00001/images*** (a minimal sketch follows).

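The repo's ***video2image.py*** may differ, but the frame-extraction step amounts to the following OpenCV sketch; the `0000.png` naming matches the layout above.
```python
# Minimal video-to-frames sketch (the actual video2image.py may differ).
import os
import cv2

def video2image(video_path: str, out_dir: str) -> None:
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()  # read frames until the stream ends
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{idx:04d}.png"), frame)
        idx += 1
    cap.release()

video2image("datasets/person/customize/video/00001/video.mp4",
            "datasets/person/customize/image/00001/images")
```
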
(3) Run the following command to obtain the agnostic mask.

```shell
cd inference/customize/gen_mask
python app_mask.py
# To extract masks for lower_body or dresses, modify line 65 as follows:
# if lower_body:
# mask, _ = get_mask_location('dc', "lower_body", model_parse, keypoints)
# if dresses:
# mask, _ = get_mask_location('dc', "dresses", model_parse, keypoints)
```
After completing the above steps, you will obtain the agnostic masks for all video frames in the ***00001/masks*** folder.
4. **Agnostic Representation**
Construct an agnostic representation of the person by removing garment-specific regions. You can directly run ***get_masked_person.py***; make sure to set the ***image_folder*** and ***mask_folder*** parameters. The resulting video frames will be stored in ***00001/agnostic*** (a minimal sketch of this step follows).

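A minimal sketch of this masking step (the actual ***get_masked_person.py*** may differ; the grey fill value and the assumption that mask and image file names match are illustrative):
```python
# Minimal agnostic-representation sketch: grey out garment pixels wherever the
# agnostic mask is white (fill value and matching file names are assumptions).
import os
import cv2

def mask_person(image_folder: str, mask_folder: str, out_folder: str) -> None:
    os.makedirs(out_folder, exist_ok=True)
    for name in sorted(os.listdir(image_folder)):
        img = cv2.imread(os.path.join(image_folder, name))
        mask = cv2.imread(os.path.join(mask_folder, name), cv2.IMREAD_GRAYSCALE)
        img[mask > 127] = 128  # fill the masked garment region with grey
        cv2.imwrite(os.path.join(out_folder, name), img)

mask_person("00001/images", "00001/masks", "00001/agnostic")
```
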
5. **DensePose**
Use DensePose to obtain UV-mapped dense human body coordinates for better spatial alignment.
(1) Install [**detectron2**](https://github.com/facebookresearch/detectron2).
(2) Run the following command:
```shell
cd inference/customize/detectron2/projects/DensePose
bash run.sh
```
(3) The generated results will be stored in the ***00001/image-densepose*** folder.

After completing the above steps, run the ***image2video.py*** file to generate the required customized videos: ***mask.mp4***, ***agnostic.mp4***, and ***densepose.mp4*** (a minimal sketch is shown after the command below). Then, run the following command:
```PowerShell
|
| 125 |
+
CUDA_VISIBLE_DEVICES=0 python predict_video_tryon_customize.py
```
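
For reference, ***image2video.py*** amounts to re-encoding each frame folder as an mp4. A minimal OpenCV sketch follows; the fps, codec, and folder-to-file mapping are assumptions, not the script's exact values.
```python
# Minimal frames-to-video sketch (fps, codec, and folder names are assumptions).
import os
import cv2

def image2video(frame_dir: str, out_path: str, fps: float = 30.0) -> None:
    names = sorted(os.listdir(frame_dir))
    h, w = cv2.imread(os.path.join(frame_dir, names[0])).shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for name in names:
        writer.write(cv2.imread(os.path.join(frame_dir, name)))
    writer.release()

case = "00001"  # the case folder used in the steps above
for sub, out in [("masks", "mask.mp4"), ("agnostic", "agnostic.mp4"),
                 ("image-densepose", "densepose.mp4")]:
    image2video(os.path.join(case, sub), os.path.join(case, out))
```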
## Acknowledgement
Our code is modified from [VideoX-Fun](https://github.com/aigc-apps/VideoX-Fun/tree/main). We adopt [Wan2.1-I2V-14B](https://github.com/Wan-Video/Wan2.1) as the base model. We use [SCHP](https://github.com/GoGoDuck912/Self-Correction-Human-Parsing/tree/master) and [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose) to generate masks, and [detectron2](https://github.com/facebookresearch/detectron2) to generate DensePose. Thanks to all the contributors!

## License
All the materials, including code, checkpoints, and demo, are made available under the [Creative Commons BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. You are free to copy, redistribute, remix, transform, and build upon the project for non-commercial purposes, as long as you give appropriate credit and distribute your contributions under the same license.
## Citation
```bibtex
@misc{li2025magictryon,
      title={MagicTryOn: Harnessing Diffusion Transformer for Garment-Preserving Video Virtual Try-on},
      author={Guangyuan Li and Siming Zheng and Hao Zhang and Jinwei Chen and Junsheng Luan and Binkai Ou and Lei Zhao and Bo Li and Peng-Tao Jiang},
      year={2025},
      eprint={2505.21325},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.21325},
}
```