---
title: My Fastfit Api
emoji: π
colorFrom: indigo
colorTo: red
sdk: gradio
sdk_version: 6.2.0
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

<div align="center">
<a href='https://huggingface.co/zhengchong/FastFit-MR-1024' style="margin: 0 2px; text-decoration: none;">
  <img src='https://img.shields.io/badge/Hugging Face-ckpts-orange?style=flat&logo=HuggingFace&logoColor=orange' alt='huggingface'>
</a>
<a href="https://github.com/Zheng-Chong/FastFit" style="margin: 0 2px; text-decoration: none;">
  <img src='https://img.shields.io/badge/GitHub-Repo-blue?style=flat&logo=GitHub' alt='GitHub'>
</a>
<a href="https://fastfit.lavieai.com" style="margin: 0 2px; text-decoration: none;">
  <img src='https://img.shields.io/badge/Demo-Gradio-gold?style=flat&logo=Gradio&logoColor=red' alt='Demo'>
</a>
<a href="https://github.com/Zheng-Chong/FastFit/tree/main" style="margin: 0 2px; text-decoration: none;">
  <img src='https://img.shields.io/badge/License-NonCommercial-lightgreen?style=flat&logo=Lisence' alt='License'>
</a>
</div>

<br>

FastFit is a diffusion-based framework optimized for **high-speed**, **multi-reference virtual try-on**. It enables **simultaneous try-on of multiple fashion items**, such as **tops, bottoms, dresses, shoes, and bags**, on a single person. The framework leverages **reference KV caching** during inference to **significantly accelerate generation**.

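The acceleration mechanism is worth a quick illustration. Below is a minimal sketch of reference KV caching under the assumption that FastFit uses it roughly as described above (class and variable names are hypothetical, not the repository's API): reference-garment features do not depend on the denoising timestep, so their attention keys and values can be computed once at the first step and reused for all remaining steps.

```python
# Illustrative sketch of reference KV caching (an assumed mechanism, NOT the
# actual FastFit code): reference-garment tokens are timestep-independent,
# so their attention K/V are computed once and reused at every later step.
import torch

class RefKVCachedAttention(torch.nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.to_q = torch.nn.Linear(dim, dim)
        self.to_k = torch.nn.Linear(dim, dim)
        self.to_v = torch.nn.Linear(dim, dim)
        self.ref_cache = None  # cached (K, V) for the reference tokens

    def forward(self, person_tokens, ref_tokens):
        q = self.to_q(person_tokens)
        if self.ref_cache is None:
            # First denoising step: project the reference tokens once.
            self.ref_cache = (self.to_k(ref_tokens), self.to_v(ref_tokens))
        k, v = self.ref_cache  # later steps skip the reference projections
        attn = torch.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
        return attn @ v
```

With `N` denoising steps, the reference branch then runs once instead of `N` times, which is where the speed-up comes from.
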
## Updates

- **`2025/08/29`**: We release the [arXiv paper](https://arxiv.org/abs/2508.20586) of FastFit!
- **`2025/08/06`**: We release [the code for inference and evaluation](https://github.com/Zheng-Chong/FastFit/tree/main?tab=readme-ov-file#inference--evaluation-on-datasets) on the [DressCode-MR](https://huggingface.co/datasets/zhengchong/DressCode-MR), [DressCode](https://huggingface.co/datasets/zhengchong/DressCode-Test), and [VITON-HD](https://huggingface.co/datasets/zhengchong/VITON-HD) test datasets.
- **`2025/08/05`**: We release the [ComfyUI workflow](https://github.com/Zheng-Chong/FastFit/releases/tag/comfyui) for FastFit!
- **`2025/08/04`**: Our [Gradio demo](https://fastfit.lavieai.com) is online with Chinese & English support! The demo code is also released in [app.py](app.py).
- **`2025/07/03`**: We release the weights of the [FastFit-MR](https://huggingface.co/zhengchong/FastFit-MR-1024) and [FastFit-SR](https://huggingface.co/zhengchong/FastFit-SR-1024) models on Hugging Face!
- **`2025/06/24`**: We release the [DressCode-MR](https://huggingface.co/datasets/zhengchong/DressCode-MR) dataset with **28K+ multi-reference virtual try-on samples** on Hugging Face!

## DressCode-MR Dataset

<div align="center">
  <img src="assets/img/dataset.png" alt="DressCode-MR Dataset" width="800">
</div>

[DressCode-MR](https://huggingface.co/datasets/zhengchong/DressCode-MR) is built on the [DressCode](https://github.com/aimagelab/dress-code) dataset and provides **28K+ multi-reference virtual try-on samples**.

- **Multi-reference samples**: Each sample pairs a person's image with a set of compatible clothing and accessory items: tops, bottoms, dresses, shoes, and bags.
- **Large scale**: 28,179 high-quality multi-reference samples in total, split into 25,779 for training and 2,400 for testing.

DressCode-MR is released under the same license as the original DressCode dataset. Therefore, before requesting access to DressCode-MR, you must complete the following steps (a download sketch follows the list):

1. Apply for and be granted a license to use the [DressCode](https://github.com/aimagelab/dress-code) dataset.
2. Use your educational/academic email address (e.g., one ending in .edu, .ac, etc.) to request access to [DressCode-MR](https://huggingface.co/datasets/zhengchong/DressCode-MR) on Hugging Face; requests from non-academic email addresses will be rejected.

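Once access is granted, one way to fetch the dataset locally is via `huggingface_hub`. This is a sketch; it assumes you have authenticated with the approved account (e.g., via `huggingface-cli login`):

```python
# Sketch: download the DressCode-MR dataset after access has been granted.
# Assumes you are authenticated (e.g. via `huggingface-cli login`).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="zhengchong/DressCode-MR",
    repo_type="dataset",  # it is a dataset repo, not a model repo
)
print("Dataset downloaded to:", local_dir)
```
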
## Installation

```powershell
conda create -n fastfit python=3.10
conda activate fastfit
pip install -r requirements.txt
pip install easy-dwpose --no-dependencies  # to resolve a version conflict

# if an error occurs for av, try:
conda install -c conda-forge av
```

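To confirm the environment resolved correctly, a quick import check can help. This assumes `torch` and `av` are among the pinned requirements; adjust the imports to your actual dependency list:

```python
# Quick sanity check that key dependencies import cleanly.
# torch/av as requirement names are assumptions; adapt to requirements.txt.
import av
import torch

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("av", av.__version__)
```
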
## ComfyUI Workflow

<div align="center">
  <img src="assets/img/comfyui.png" alt="ComfyUI Workflow" width="800">
</div>

1. Clone the FastFit repository into your `ComfyUI/custom_nodes/` directory.

   ```powershell
   cd Your_ComfyUI_Dir/custom_nodes
   git clone https://github.com/Zheng-Chong/FastFit.git
   ```

2. Install the required dependencies.

   ```powershell
   cd FastFit
   pip install -r requirements.txt
   pip install easy-dwpose --no-dependencies  # to resolve a version conflict

   # if an error occurs for av, try:
   conda install -c conda-forge av
   ```

3. Install [rgthree-comfy](https://github.com/rgthree/rgthree-comfy) for the image comparer node.

   ```powershell
   cd Your_ComfyUI_Dir/custom_nodes
   git clone https://github.com/rgthree/rgthree-comfy.git
   cd rgthree-comfy
   pip install -r requirements.txt
   ```

4. Restart ComfyUI.

5. Drag and drop the [fastfit_workflow.json](https://github.com/Zheng-Chong/FastFit/blob/main/assets/fastfit_workflow.json) file onto the ComfyUI web interface.

## Gradio Demo

The model weights are downloaded automatically from Hugging Face the first time you run the demo.

```bash
python app.py
```

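If you prefer to call the hosted demo from code rather than the browser, `gradio_client` can introspect it. This is a sketch only; the endpoint names and argument layout are not documented here, so list them with `view_api()` before calling `predict()`:

```python
# Sketch: query the hosted Gradio demo programmatically.
# Endpoint names/arguments are unknown here -- inspect them first.
from gradio_client import Client

client = Client("https://fastfit.lavieai.com")
client.view_api()  # prints the available endpoints and their expected inputs
```
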
## Inference & Evaluation on Datasets

To perform inference on the [DressCode-MR](https://huggingface.co/datasets/zhengchong/DressCode-MR), [DressCode](https://huggingface.co/datasets/zhengchong/DressCode-Test), or [VITON-HD](https://huggingface.co/datasets/zhengchong/VITON-HD) test datasets, use the `infer_datasets.py` script, for example:

```bash
python infer_datasets.py \
  --dataset <dataset_name> \
  --data_dir </path/to/your/dataset> \
  --batch_size 4 \
  --num_inference_steps 50 \
  --guidance_scale 2.5 \
  --mixed_precision bf16 \
  --paired
```

- `--dataset`: The target dataset; choose from `dresscode-mr`, `dresscode`, or `viton-hd`.
- `--data_dir`: The root directory of the specified dataset.
- `--paired`: Include this flag to run inference in the paired setting; omit it for the unpaired setting.

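A note on the sampler flags, assuming the script applies standard classifier-free guidance (the usual setup for diffusion try-on pipelines): at each denoising step the model prediction is roughly `pred = uncond + guidance_scale * (cond - uncond)`, so `--guidance_scale 2.5` biases generation toward the reference garments, and larger values follow the references more strictly at some cost to image naturalness.
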
By default, inference results are saved to the `results/` directory at the project root.

---

To compute evaluation metrics on the generated results, use the `eval.py` script, for example:

```bash
python eval.py \
  --gt_folder </path/to/ground_truth_folder> \
  --pred_folder </path/to/prediction_folder> \
  --paired \
  --batch_size 16 \
  --num_workers 4
```

- `--gt_folder`: The directory containing the ground-truth images.
- `--pred_folder`: The directory containing the generated (predicted) images from the inference step.
- `--paired`: Include this flag to evaluate results from the paired setting; omit it for the unpaired setting.

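As a rough illustration of what a paired evaluation involves (the authoritative metric set lives in `eval.py`; SSIM below is just one common reconstruction metric, and the match-by-filename folder layout is an assumption):

```python
# Rough sketch of a paired comparison: match predictions to ground truth by
# file name and average a reconstruction metric. The metric choice (SSIM) and
# folder layout are assumptions, not necessarily what eval.py computes.
from pathlib import Path

import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

def mean_ssim(gt_folder: str, pred_folder: str) -> float:
    scores = []
    for gt_path in sorted(Path(gt_folder).glob("*.jpg")):
        pred_path = Path(pred_folder) / gt_path.name  # pair by file name
        gt = Image.open(gt_path).convert("RGB")
        pred = Image.open(pred_path).convert("RGB").resize(gt.size)
        scores.append(structural_similarity(
            np.asarray(gt), np.asarray(pred), channel_axis=-1))
    return float(np.mean(scores))

print(mean_ssim("results/gt", "results/pred"))  # hypothetical paths
```
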
## Citation

```bibtex
@misc{chong2025fastfitacceleratingmultireferencevirtual,
  title={FastFit: Accelerating Multi-Reference Virtual Try-On via Cacheable Diffusion Models},
  author={Zheng Chong and Yanwei Lei and Shiyue Zhang and Zhuandi He and Zhen Wang and Xujie Zhang and Xiao Dong and Yiling Wu and Dongmei Jiang and Xiaodan Liang},
  year={2025},
  eprint={2508.20586},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.20586},
}
```

## Acknowledgement

Our code is adapted from [Diffusers](https://github.com/huggingface/diffusers). We adopt [Stable Diffusion v1.5 inpainting](https://huggingface.co/runwayml/stable-diffusion-inpainting) as the base model. We use a modified [AutoMasker](https://github.com/Zheng-Chong/CatVTON/blob/edited/model/cloth_masker.py) to automatically generate masks in our [Gradio](https://github.com/gradio-app/gradio) app and [ComfyUI](https://github.com/comfyanonymous/ComfyUI) workflow. Thanks to all the contributors!

## License

All weights, parameters, and code related to FastFit are governed by the [FastFit Non-Commercial License](https://github.com/Zheng-Chong/FastFit/tree/main). For commercial collaboration, please contact [LavieAI](https://lavieai.com/) or [LoomlyAI](https://www.loomlyai.com/en).