Add comprehensive model card for PatchDPO
#1 by nielsr (HF Staff) - opened

README.md CHANGED
---
license: apache-2.0
pipeline_tag: text-to-image
library_name: diffusers
---

# PatchDPO: Patch-level DPO for Finetuning-free Personalized Image Generation

This repository contains the official implementation of the paper "[PatchDPO: Patch-level DPO for Finetuning-free Personalized Image Generation](https://huggingface.co/papers/2412.03177)".

The code and full project details are available on GitHub: [https://github.com/hqhQAQ/PatchDPO](https://github.com/hqhQAQ/PatchDPO)
### Overview

Finetuning-free personalized image generation synthesizes customized images without test-time finetuning, attracting wide research interest owing to its high efficiency. Current finetuning-free methods adopt a single training stage with a simple image reconstruction task, and at test time they often generate low-quality images that are inconsistent with the reference images. To mitigate this problem, inspired by the recent DPO (direct preference optimization) technique, this work proposes an additional training stage to improve pre-trained personalized generation models. However, traditional DPO only determines the overall superiority or inferiority of two samples, which is ill-suited to personalized image generation, where generated images are typically inconsistent with the reference images only in some local image patches. To tackle this problem, this work proposes **PatchDPO**, which estimates the quality of the image patches within each generated image and trains the model accordingly. To this end, PatchDPO first leverages a pre-trained vision model with a proposed self-supervised training method to estimate patch quality. It then adopts a weighted training approach, rewarding image patches with high quality while penalizing those with low quality.

With PatchDPO, our model achieves **state-of-the-art** performance on personalized image generation, with **only 4 hours** of training time on 8 GPUs.
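The weighted training idea above can be sketched in a few lines. This is only a minimal illustration, not the actual training objective: the function name, the binary ±1 weighting, and the quality threshold are all assumptions.

```python
def patch_weighted_loss(patch_losses, patch_quality, threshold=0.5):
    """Hypothetical sketch of PatchDPO-style weighted training:
    patches estimated as high quality contribute with positive weight
    (rewarded), low-quality patches with negative weight (penalized)."""
    weights = [1.0 if q >= threshold else -1.0 for q in patch_quality]
    # average the signed per-patch losses into a single training signal
    return sum(w * l for w, l in zip(weights, patch_losses)) / len(patch_losses)
```

In the real method the per-patch quality is estimated by a pre-trained vision model, and the weighting is applied inside the diffusion training loss rather than on scalar per-patch losses.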


### 🔥🔥🔥 News!!

- 📰 [2024.12.05] Our paper is available at [arXiv](https://arxiv.org/abs/2412.03177).
- 🤗 [2024.12.05] Our model weights are available at [Hugging Face](https://huggingface.co/hqhQAQ/PatchDPO).
- 🚀 [2024.12.05] Training code is available [here](https://github.com/hqhQAQ/PatchDPO).
- 🚀 [2024.12.05] Inference code is available [here](https://github.com/hqhQAQ/PatchDPO).
- 🚀 [2024.12.05] Evaluation code is available [here](https://github.com/hqhQAQ/PatchDPO).
- 💬 [2024.12.05] Our preliminary work **MIP-Adapter** for multi-object personalized generation is available at [MIP-Adapter](https://github.com/hqhQAQ/MIP-Adapter).
- 💬 [2024.12.05] Our preliminary work **MS-Diffusion** for multi-object personalized generation is available at [MS-Diffusion](https://github.com/MS-Diffusion/MS-Diffusion).
- 💬 [2025.02.27] Our paper is accepted by CVPR 2025!
### Performance

#### Quantitative Comparison

We compare PatchDPO with other personalized image generation methods on the well-known *DreamBench*.
Three metrics are used for evaluation on this benchmark: DINO, CLIP-I, and CLIP-T.
CLIP-T evaluates text alignment, while DINO and CLIP-I evaluate image alignment.

The comparison results are shown in Tables 1 & 2 (the results of other methods are taken from their papers):
<div style="display: flex; justify-content: center;">
    <img src="./assets/dreambench_performance.png" alt="DreamBench Performance" width="45%">
</div>

Specifically, two evaluation settings are adopted in Tables 1 & 2, respectively:

* **Table 1** uses the original setting followed by most existing methods. In this setting, DINO and CLIP-I are calculated by comparing the generated image with **all images of the same object**.

* **Table 2** uses the evaluation setting of [Kosmos-G](https://github.com/xichenpan/Kosmos-G). In this setting, only one image is preserved for each object, and DINO and CLIP-I are calculated by comparing the generated image with **only this image**.
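Both image-alignment metrics reduce to cosine similarity between embedding vectors (DINO or CLIP image embeddings), averaged over all references of an object in the Table 1 setting or computed against a single reference in the Table 2 setting. A self-contained sketch with hypothetical helper names (the real metrics use model embeddings, not toy vectors):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def image_alignment(gen_emb, ref_embs):
    """Average similarity between a generated image and its reference image(s):
    pass all references (Table 1 setting) or a single one (Table 2 setting)."""
    return sum(cosine_similarity(gen_emb, r) for r in ref_embs) / len(ref_embs)
```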
#### Qualitative Comparison

Examples of images generated by PatchDPO are shown below:



### Requirements

The Python packages required for this project are listed below:
```
torch==1.13.1
torchvision==0.14.1
diffusers==0.23.1
einops
modelscope
numpy==1.24.4
oss2
Pillow==10.1.0
PyYAML
safetensors
tqdm
imgviz
transformers==4.35.2
tensorboard
accelerate==0.23.0
opencv-python
openai-clip
setuptools==69.5.1
```
### Dataset

* **Training dataset.** The release of the training dataset is in preparation.

* **Test dataset.** We evaluate our model on the well-known **DreamBench**.

Prepare this dataset by downloading the `dataset` folder ([DreamBooth dataset](https://github.com/google/dreambooth/tree/main/dataset)) and placing it in the `dreambench` folder of this project.
The resulting file structure of the `dreambench` folder is as follows:
```
--dreambench
    |--dataset
        |--backpack
        |--backpack_dog
        |--bear_plushie
        |--berry_bowl
        |...
    |--json_data
```
### Pre-trained models

* **Base model.** PatchDPO is built on the SDXL model, which is required for both training and inference. Prepare this model by downloading the pre-trained weights from Hugging Face:

    * [SG161222/RealVisXL_V1.0](https://huggingface.co/SG161222/RealVisXL_V1.0)

* **Training.** PatchDPO is trained on top of the IP-Adapter-Plus model. Prepare this model by downloading the pre-trained weights from Hugging Face:

    * [h94/IP-Adapter](https://huggingface.co/h94/IP-Adapter)

* **Inference and evaluation.** Our trained PatchDPO model can be downloaded from:

    * [hqhQAQ/PatchDPO](https://huggingface.co/hqhQAQ/PatchDPO)

Below, the paths of these models are denoted `/PATH/TO/RealVisXL_V1.0/`, `/PATH/TO/IP-Adapter/`, and `/PATH/TO/PatchDPO/`, respectively.
### Training

Run the following script to train the PatchDPO model on top of the IP-Adapter-Plus model using the PatchDPO dataset (note that `/PATH/TO` in `--pretrained_model_name_or_path`, `--image_encoder_path`, `--pretrained_ip_adapter_path`, `--data_root_path`, and `--patch_quality_file` should be changed to your own paths; 8 GPUs are used here):

```bash
accelerate launch --num_processes 8 --multi_gpu --mixed_precision "fp16" train_patchdpo.py \
    --pretrained_model_name_or_path /PATH/TO/RealVisXL_V1.0/ \
    --image_encoder_path /PATH/TO/IP-Adapter/models/image_encoder/ \
    --pretrained_ip_adapter_path /PATH/TO/IP-Adapter/sdxl_models/ip-adapter-plus_sdxl_vit-h.bin \
    --data_root_path /PATH/TO/patchdpo_dataset/ \
    --patch_quality_file /PATH/TO/patchdpo_dataset/patch_quality.pkl \
    --mixed_precision fp16 \
    --resolution 512 \
    --train_batch_size 4 \
    --dataloader_num_workers 4 \
    --learning_rate 3e-05 \
    --weight_decay 0.01 \
    --save_steps 10000 \
    --stop_step 30000 \
    --output_dir output/exp1/ \
    --use_dpo_loss True
```
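`--patch_quality_file` points at a pickle of pre-computed per-patch quality estimates. The exact schema is defined by the repository's training code; a mapping from sample identifier to a list of per-patch scores is one plausible shape, shown here purely as an assumption:

```python
import pickle

# Hypothetical schema: map each training image id to a flat list of
# per-patch quality scores in [0, 1]. The real file's schema is defined
# by train_patchdpo.py and may differ.
patch_quality = {
    "img_00001": [0.91, 0.12, 0.87, 0.45],
    "img_00002": [0.66, 0.94, 0.08, 0.73],
}

with open("patch_quality.pkl", "wb") as f:
    pickle.dump(patch_quality, f)

# round-trip to confirm the file is readable
with open("patch_quality.pkl", "rb") as f:
    loaded = pickle.load(f)
```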
### Inference

Run the following scripts to conduct inference on *DreamBench*.
Note that `--ip_ckpt` gives the path of the trained model to be evaluated.

* Inference in the **original setting** using 2 GPUs:

```bash
accelerate launch --num_processes 2 --multi_gpu --mixed_precision "fp16" inference_dreambooth.py \
    --base_model_path /PATH/TO/RealVisXL_V1.0/ \
    --image_encoder_path /PATH/TO/IP-Adapter/models/image_encoder/ \
    --ip_ckpt /PATH/TO/PatchDPO/model.bin \
    --data_root dreambench \
    --output_dir output/exp1_eval/ \
    --scale 0.78 \
    --is_kosmosg False
```
* Inference in the **Kosmos-G setting** using 2 GPUs:

```bash
accelerate launch --num_processes 2 --multi_gpu --mixed_precision "fp16" inference_dreambooth.py \
    --base_model_path /PATH/TO/RealVisXL_V1.0/ \
    --image_encoder_path /PATH/TO/IP-Adapter/models/image_encoder/ \
    --ip_ckpt /PATH/TO/PatchDPO/model.bin \
    --data_root dreambench \
    --output_dir output/exp1_eval_kosmosg/ \
    --scale 0.65 \
    --is_kosmosg True
```
### Evaluation

We merge the original evaluation setting and the Kosmos-G setting into a single script (`evaluate_dreambooth.py`) for DreamBench evaluation, making it convenient for future researchers to use.

Running this script takes two steps:

* First, generate the images into a folder `$output_dir` in any way you like, **as long as** it follows this format:
```
--$output_dir
    |--backpack
        |--a backpack floating in an ocean of milk.png
        |--a backpack floating on top of water.png
        |--a backpack in the jungle.png
        |...
    |--backpack_dog
    |--bear_plushie
    |--berry_bowl
    |...
```
Specifically, `$output_dir` contains 30 subfolders (one per object), and each subfolder saves the generated images for that object and is named after it (*i.e.*, the folder names are consistent with those in [dreambench/dataset](https://github.com/google/dreambooth/tree/main/dataset)).

Each subfolder contains 25 images (corresponding to the 25 prompts for each object), and each image is named after its prompt.
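The layout can be sanity-checked before evaluation. The helper below is not part of the repository, just a small sketch of the 30-folders-by-25-images convention:

```python
from pathlib import Path

def check_output_dir(output_dir, num_objects=30, images_per_object=25):
    """Verify that output_dir holds one folder per object,
    each containing one generated .png per prompt."""
    subdirs = [d for d in Path(output_dir).iterdir() if d.is_dir()]
    if len(subdirs) != num_objects:
        raise ValueError(f"expected {num_objects} object folders, found {len(subdirs)}")
    for d in sorted(subdirs):
        n = len(list(d.glob("*.png")))
        if n != images_per_object:
            raise ValueError(f"{d.name}: expected {images_per_object} images, found {n}")
    return True
```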
* Next, run the following scripts for evaluation in the two settings:

    * **Original setting:**

    ```bash
    python evaluate_dreambooth.py \
        --output_dir $output_dir \
        --data_root dreambench \
        --is_kosmosg False
    ```

    * **Kosmos-G setting:**

    ```bash
    python evaluate_dreambooth.py \
        --output_dir $output_dir \
        --data_root dreambench \
        --is_kosmosg True
    ```
## Citation

If you find our work helpful or inspiring, please feel free to cite it.

```bibtex
@article{zhou2024patchdpo,
  title={PatchDPO: Patch-level DPO for Finetuning-free Personalized Image Generation},
  author={Zhou, Zijian and Liu, Shikun and Han, Xiao and Liu, Haozhe and Ng, Kam Woh and Xie, Tian and Cong, Yuren and Li, Hang and Xu, Mengmeng and P{\'e}rez-R{\'u}a, Juan-Manuel and Patel, Aditya and Xiang, Tao and Shi, Miaojing and He, Sen},
  journal={arXiv preprint arXiv:2412.03177},
  year={2024}
}
```