# T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-image Generation
Kaiyi Huang<sup>1</sup>, Kaiyue Sun<sup>1</sup>, Enze Xie<sup>2</sup>, Zhenguo Li<sup>2</sup>, and Xihui Liu<sup>1</sup>.

**<sup>1</sup>The University of Hong Kong, <sup>2</sup>Huawei Noah’s Ark Lab**

<a href='https://karine-h.github.io/T2I-CompBench/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
<a href='https://arxiv.org/pdf/2307.06350.pdf'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a> 
<a href='https://connecthkuhk-my.sharepoint.com/:f:/g/personal/huangky_connect_hku_hk/Er_BhrcMwGREht6gnKGIErMBx8H8yRXLDfWgWQwKaObQ4w?e=YzT5wG'><img src='https://img.shields.io/badge/Dataset-T2I--CompBench-blue'></a> 

## 🚩 **New Features/Updates**
- ✅ Dec. 02, 2023. Released the inference code for generating images for metric evaluation.
- ✅ Oct. 20, 2023. 💥 Evaluation metric adopted by 🧨 [**DALL-E 3**](https://cdn.openai.com/papers/dall-e-3.pdf) for evaluating compositionality.
- ✅ Sep. 30, 2023. 💥 Evaluation metric adopted by 🧨 [**PixArt-α**](https://arxiv.org/pdf/2310.00426.pdf) for evaluating compositionality.
- ✅ Sep. 22, 2023. 💥 Paper accepted to NeurIPS 2023.
- ✅ Jul. 9, 2023. Released the dataset, training, and evaluation code.
- [ ] Human evaluation of image-score pairs


## **Installing the dependencies**

Before running the scripts, make sure to install the library's training dependencies:

**Important**

We recommend using the **latest code** to ensure consistency with the results presented in the paper. To make sure you can successfully run the example scripts, execute the following steps in a new virtual environment.
We use **diffusers** version **0.15.0.dev0**.
You can either install it via pip:
```bash

pip install diffusers==0.15.0.dev0

```
or install from the provided source:
```bash

unzip diffusers.zip

cd diffusers

pip install .

```

Then `cd` into the example folder and run
```bash

pip install -r requirements.txt

```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash

accelerate config

```
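If you prefer a non-interactive setup, newer versions of 🤗 Accelerate can also write a default configuration file (optional; the interactive `accelerate config` above works just as well):

```bash
# Write a default single-machine Accelerate config without the interactive prompts.
accelerate config default
```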


## **Finetuning**
1. LoRA finetuning

To use the LoRA finetuning method, download the "lora_diffusion" directory from the repository below:

```
https://github.com/cloneofsimo/lora/tree/master
```
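One way to fetch that directory is sketched below; the destination path (`/T2I-CompBench`, matching the example in the next step) is an assumption, so adjust it to your own checkout:

```bash
# Clone the LoRA repository and copy the "lora_diffusion" package
# into the T2I-CompBench checkout (destination path assumed).
git clone https://github.com/cloneofsimo/lora.git
cp -r lora/lora_diffusion /T2I-CompBench/
```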

2. Example usage





```bash
export project_dir=/T2I-CompBench
cd $project_dir

export train_data_dir="examples/samples/"
export output_dir="examples/output/"
export reward_root="examples/reward/"
export dataset_root="examples/dataset/color.txt"
export script=GORS_finetune/train_text_to_image.py

accelerate launch --multi_gpu --mixed_precision=fp16 \
  --num_processes=8 --num_machines=1 \
  --dynamo_backend=no "${script}" \
  --train_data_dir="${train_data_dir}" \
  --output_dir="${output_dir}" \
  --reward_root="${reward_root}" \
  --dataset_root="${dataset_root}"
```

or run

```

cd T2I-CompBench

bash GORS_finetune/train.sh
```









The image directory should contain the training images, e.g.,





```
examples/samples/
├── a green bench and a blue bowl_000000.png
├── a green bench and a blue bowl_000001.png
└── ...
```
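As a sanity check, the filenames expected for a single prompt can be enumerated as below (a sketch; the zero-padded six-digit index follows the example above, and 10 images per prompt is assumed to match the evaluation protocol described later):

```bash
# Print the image paths expected for one prompt,
# following the "<prompt>_<index>.png" pattern shown above.
prompt="a green bench and a blue bowl"
for i in $(seq -f "%06g" 0 9); do
  echo "examples/samples/${prompt}_${i}.png"
done
```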

The reward directory should contain a JSON file named "vqa_result.json". The file is a list of entries, each mapping a `"question_id"` to an `"answer"` (the score for the corresponding image), e.g.,

```
[{"question_id": 0, "answer": "0.7110"},
 {"question_id": 1, "answer": "0.7110"},
 ...]
```



The dataset (the prompt .txt files, e.g., "color.txt") should be placed in the "examples/dataset/" directory.
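Putting the pieces together, the finetuning inputs follow the layout used by the example above (a sketch of the expected skeleton, not an additional required step):

```bash
# Create the expected input skeleton (paths taken from the example exports above).
mkdir -p examples/samples examples/reward examples/dataset
# examples/samples/                 training images, one file per generated image
# examples/reward/vqa_result.json   per-image scores in the format shown above
# examples/dataset/color.txt        prompt list for the chosen category
```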





## **Evaluation**

1. Install the requirements



The MiniGPT-4 evaluation is built on the repository below; refer to it for environment dependencies and weights:

```
https://github.com/Vision-CAIR/MiniGPT-4
```



2. Example usage



For evaluation, the input image files are stored in the "examples/samples/" directory, in the same format as the training data.



#### BLIP-VQA:

```
export project_dir="BLIPvqa_eval/"
cd $project_dir

out_dir="examples/"
python BLIP_vqa.py --out_dir=$out_dir

```

or run

```

cd T2I-CompBench

bash BLIPvqa_eval/test.sh
```

The output is a JSON file named "vqa_result.json" in the "examples/annotation_blip/" directory.



#### UniDet:



Download the detector weights and put them under "UniDet_eval/experts/expert_weights":

```
mkdir -p UniDet_eval/experts/expert_weights
cd UniDet_eval/experts/expert_weights
wget https://huggingface.co/shikunl/prismer/resolve/main/expert_weights/Unified_learned_OCIM_RS200_6x%2B2x.pth

```





```

export project_dir=UniDet_eval

cd $project_dir

python determine_position_for_eval.py

```

To evaluate prompts from the **"complex" category**, set the **"--complex"** parameter to True; otherwise, set it to False.
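For example (the flag name is taken from the note above):

```bash
# Evaluate prompts from the "complex" category
# (run from the UniDet_eval directory, as in the block above).
python determine_position_for_eval.py --complex True
```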

The output is a JSON file named "vqa_result.json" in the "examples/labels/annotation_obj_detection" directory.

#### CLIPScore:
```

outpath="examples/"

python CLIPScore_eval/CLIP_similarity.py --outpath=${outpath}

```
or run
```

cd T2I-CompBench

bash CLIPScore_eval/test.sh

```
To evaluate prompts from the **"complex" category**, set the **"--complex"** parameter to True; otherwise, set it to False.
The output is a JSON file named "vqa_result.json" in the "examples/annotation_clip" directory.


#### 3-in-1:
```

export project_dir="3_in_1_eval/"

cd $project_dir

outpath="examples/"

python "3_in_1.py" --outpath=${outpath}

```
The output is a JSON file named "vqa_result.json" in the "examples/annotation_3_in_1" directory.

#### MiniGPT4-CoT:
If the category to be evaluated is color, shape, or texture:
```

export project_dir=MiniGPT4_CoT_eval

cd $project_dir

category="color"

img_file="examples/samples/"

output_path="examples/"

python mGPT_cot_attribute.py --category=${category} --img_file=${img_file} --output_path=${output_path} 



```

If the category to be evaluated is spatial, non-spatial, or complex:
```

export project_dir=MiniGPT4_CoT_eval/

cd $project_dir

category="non-spatial"

img_file="examples/samples/"

output_path="examples"

python mGPT_cot_general.py --category=${category} --img_file=${img_file} --output_path=${output_path} 



```
The output is a CSV file named "mGPT_cot_output.csv" in the specified output_path.



### Inference

Run inference.py to generate an image from a given prompt for visualization.

```

export pretrained_model_path="checkpoint/color/lora_weight_e357_s124500.pt.pt"
export prompt="A bathroom with green tile and a red shower curtain"
python inference.py --pretrained_model_path "${pretrained_model_path}" --prompt "${prompt}"
```

**Generate images for metric calculation.** Run inference_eval.py to generate images for the test set. As stated in the paper, 10 images are generated per prompt for **metric calculation**, and we use a fixed seed across all methods.

You can specify the test set by changing the "from_file" parameter among {color_val.txt, shape_val.txt, texture_val.txt, spatial_val.txt, non_spatial_val.txt, complex_val.txt}.

```
export from_file="../examples/dataset/color_val.txt"
python inference_eval.py  --from_file "${from_file}"

```
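To cover all six categories, the same script can be looped over the validation files listed above (a sketch; file names and the relative path follow the example):

```bash
# Generate test images for every category's validation prompt file.
for f in color_val.txt shape_val.txt texture_val.txt spatial_val.txt \
         non_spatial_val.txt complex_val.txt; do
  python inference_eval.py --from_file "../examples/dataset/${f}"
done
```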



### Citation

If you're using T2I-CompBench in your research or applications, please cite using this BibTeX:

```bibtex
@article{huang2023t2icompbench,
  title={T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-image Generation},
  author={Kaiyi Huang and Kaiyue Sun and Enze Xie and Zhenguo Li and Xihui Liu},
  journal={arXiv preprint arXiv:2307.06350},
  year={2023},
}
```





### License



This project is licensed under the MIT License. See the "License.txt" file for details.