FireRedTeam committed
Commit 60faff2 · verified · 1 Parent(s): 5300d4b

Upload folder using huggingface_hub

Files changed (3):
  1. README.md +313 -3
  2. REDEdit_Bench.parquet +3 -0
  3. REDEdit_Bench.tar +3 -0

README.md CHANGED
@@ -1,3 +1,313 @@
- ---
- license: apache-2.0
- ---
 
# 🚩 RedBench (REDEdit-Bench)

<p align="center">
<a href="https://huggingface.co/datasets/FireRedTeam/REDEdit-Bench" target="_blank"><img alt="Hugging Face Dataset" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-RedBench-ffc107?color=ffc107&logoColor=white" style="display: inline-block;"/></a>
<a href="https://github.com/FireRedTeam/FireRed-Image-Edit"><img src='https://img.shields.io/badge/GitHub-Code-black'></a>
<a href='https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode'><img src="https://img.shields.io/badge/License-CC%20BY--NC--ND%204.0-lightgrey.svg" alt="License"></a>
<a href="https://arxiv.org/abs/2602.13344" target="_blank"><img src="https://img.shields.io/badge/Report-b5212f.svg?logo=arxiv"></a>
</p>

## 🔥 Introduction

**RedBench** (also known as REDEdit-Bench) is a comprehensive benchmark designed to evaluate the capabilities of current image editing models.

Our main goal is to build more diverse scenarios and editing instructions that better align with human language. We collected over 3,000 images from the internet and, after careful expert-designed selection, constructed **1,673 bilingual (Chinese–English) editing pairs** across **15 categories**.

📢 **Note on Dataset Size**: The original benchmark described in the paper consists of 1,673 image pairs. However, due to strict redistribution licensing restrictions on certain commercial assets, the public release has been curated down to **1,542 pairs**. This ensures full copyright compliance while preserving the diversity and quality of the benchmark.

## ✨ Key Features

- **🗣️ Human-Aligned Instructions**: Diverse scenarios and editing instructions that closely mimic real-world human usage.
- **🌐 Bilingual Support**: Full support for both Chinese and English editing instructions.
- **🛡️ Quality Assurance**: Carefully curated by experts from a massive collection of source images.
- **🧩 Diverse Tasks**: Covers 15 distinct categories including Object Addition, Removal, Replacement, Style Transfer, and more.

## 📂 Data Structure & Examples

The dataset is organized in JSONL format. Each entry contains the image source, bilingual instructions, and the specific task category.

### Task Categories

The benchmark covers 15 task categories (the `all` row is the total across the released 1,542 pairs):

| Category | Count | Description |
|----------|-------|-------------|
| add | 143 | Object Addition |
| adjust | 156 | Attribute Adjustment |
| background | 91 | Background Modification |
| beauty | 79 | Beauty Enhancement |
| color | 99 | Color Modification |
| compose | 100 | Image Composition |
| extract | 95 | Element Extraction |
| lowlevel | 47 | Low-level Processing |
| motion | 78 | Motion Addition |
| portrait | 102 | Portrait Editing |
| remove | 147 | Object Removal |
| replace | 140 | Object Replacement |
| stylize | 92 | Style Transfer |
| text | 123 | Text Editing |
| viewpoint | 50 | Viewpoint Change |
| all | 1542 | All Tasks |

### Sample Data

```json
{"id": "1", "source": "redbench/add/add-1.png", "a_to_b_instructions": "在图片中绿色植物上增加一只七星瓢虫", "a_to_b_instructions_eng": "Add a seven-spotted ladybug on the green plant in the picture", "task": "add"}
{"id": "2", "source": "redbench/add/add-2.png", "a_to_b_instructions": "在咖啡杯里加一个白色心形拉花", "a_to_b_instructions_eng": "Add a white heart-shaped latte art in the coffee cup", "task": "add"}
{"id": "3", "source": "redbench/add/add-3.png", "a_to_b_instructions": "在马路上增加一个穿运动服跑步的男人", "a_to_b_instructions_eng": "Add a man running in sportswear on the road", "task": "add"}
```
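Since each line is a standalone JSON object, the file can be loaded line by line with the standard library. A minimal sketch using two sample entries from above (the real `redbench.jsonl` has one such line per editing pair; only the English instruction field is shown here for brevity):

```python
import json
from collections import Counter

# Two sample lines copied from the README; the real redbench.jsonl has
# one such line per editing pair.
lines = [
    '{"id": "1", "source": "redbench/add/add-1.png", '
    '"a_to_b_instructions_eng": "Add a seven-spotted ladybug on the green plant in the picture", '
    '"task": "add"}',
    '{"id": "3", "source": "redbench/adjust/adjust-144.png", '
    '"a_to_b_instructions_eng": "Change the sky color to a deeper blue", '
    '"task": "adjust"}',
]

entries = [json.loads(line) for line in lines]
per_task = Counter(e["task"] for e in entries)
print(per_task)  # Counter({'add': 1, 'adjust': 1})
```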
# Generate Images

Before evaluating a model, you first need to use the provided JSONL file (which contains the metadata) together with the original image files to generate the corresponding edited images with your editing model.

We provide the inference script `redbench_infer.py` for generating edited images. The script supports multi-GPU distributed inference via Accelerate.

## Dependencies

Install the required dependencies:

```bash
pip install accelerate diffusers transformers pillow tqdm
```

Then download our dataset [REDEdit_Bench.tar](https://huggingface.co/datasets/FireRedTeam/REDEdit_Bench/resolve/main/REDEdit_Bench.tar?download=true) and extract the tar file.

## Usage

To run the inference script, use the following command:

```bash
accelerate launch --num_processes <num_gpus> redbench_infer.py --model-path <path_to_model> --jsonl-path <path_to_redbench_jsonl> --save-path <path_to_save_results>
```

### Arguments:
- `--model-path`: Path to the model. Default is `FireRedTeam/FireRed-Image-Edit-1.0`.
- `--lora-name`: Path to LoRA weights (optional).
- `--save-path`: Directory to save the generated images (required).
- `--jsonl-path`: Path to the JSONL file containing edit instructions (required).
- `--edit-task`: Specific task to process (e.g., `add`, `remove`, `stylize`). Default is `all`.
- `--multi-folder`: Save outputs in per-task subfolders instead of a flat directory (optional).
- `--save-key`: Key name under which each result path is saved. Default is `result`.
- `--seed`: Random seed. Default is `43`.
- `--lang`: Instruction language, `cn` or `eng`. Default is `cn`.

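How `--edit-task` interacts with the multi-process launch is not documented beyond the flags above. As a rough sketch of the selection step (the function name and interleaved sharding scheme are illustrative assumptions, not the script's actual internals):

```python
import json

def select_entries(jsonl_lines, edit_task="all", rank=0, world_size=1):
    """Filter entries by task, then give each process an interleaved shard."""
    entries = [json.loads(line) for line in jsonl_lines if line.strip()]
    if edit_task != "all":
        entries = [e for e in entries if e["task"] == edit_task]
    # Each rank processes every world_size-th entry of the filtered list.
    return entries[rank::world_size]

lines = [
    '{"id": "1", "task": "add"}',
    '{"id": "2", "task": "add"}',
    '{"id": "3", "task": "adjust"}',
]
# Only the "add" entries, split across 2 processes:
print(select_entries(lines, "add", rank=0, world_size=2))  # [{'id': '1', 'task': 'add'}]
print(select_entries(lines, "add", rank=1, world_size=2))  # [{'id': '2', 'task': 'add'}]
```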
### Example:

```bash
# Generate all edited images using 8 GPUs
accelerate launch --num_processes 8 redbench_infer.py \
    --model-path FireRedTeam/FireRed-Image-Edit-1.1 \
    --jsonl-path ./redbench.jsonl \
    --save-path ./edited_images_cn \
    --edit-task all \
    --lang cn
```

## Example Input/Output

### Input

A JSONL file containing image edit instructions (`redbench.jsonl`):

```jsonl
{"id": "1", "source": "redbench/add/add-1.png", "a_to_b_instructions": "在图片中绿色植物上增加一只七星瓢虫", "a_to_b_instructions_eng": "Add a seven-spotted ladybug on the green plant in the picture", "task": "add"}
{"id": "2", "source": "redbench/add/add-2.png", "a_to_b_instructions": "在咖啡杯里加一个白色心形拉花", "a_to_b_instructions_eng": "Add a white heart-shaped latte art in the coffee cup", "task": "add"}
{"id": "3", "source": "redbench/adjust/adjust-144.png", "a_to_b_instructions": "将天空的颜色调成更深的蓝色", "a_to_b_instructions_eng": "Change the sky color to a deeper blue", "task": "adjust"}
```

A folder containing the original images:

```folder
├── redbench
│   ├── add
│   │   ├── add-1.png
│   │   ├── add-2.png
│   │   ├── ...
│   ├── adjust
│   │   ├── adjust-144.png
│   │   ├── ...
│   ├── ...
```

### Output

A folder containing the edited images:

```folder
# Without the --multi-folder option:
├── edited_images
│   ├── 1.png
│   ├── 2.png
│   ├── 3.png
│   ├── ...
│   ├── result.jsonl

# With the --multi-folder option:
├── edited_images
│   ├── add
│   │   ├── 1.png
│   │   ├── 2.png
│   │   ├── ...
│   ├── adjust
│   │   ├── 144.png
│   │   ├── ...
│   ├── ...
│   ├── result.jsonl
```

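The `result.jsonl` written alongside the images presumably records, for each entry, where its output was saved, stored under the `--save-key` field name (`result` by default). A sketch of that bookkeeping using the flat-directory naming shown above (illustrative, not the script's exact code):

```python
import json

def record_result(entry, save_dir, save_key="result"):
    """Return a copy of the entry with the generated image path attached."""
    out = dict(entry)
    out[save_key] = f"{save_dir}/{entry['id']}.png"
    return out

entry = {"id": "1", "source": "redbench/add/add-1.png", "task": "add"}
row = record_result(entry, "./edited_images")
print(json.dumps(row))  # one line of result.jsonl
```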
# Image Editing Evaluation using Gemini-3-Flash

This project evaluates image edits using the **Gemini-3-Flash API**. The system processes a set of original and edited images, comparing them against a predefined set of criteria such as instruction adherence, image-editing quality, and detail preservation.

We provide the evaluation script `redbench_eval.py` for automated evaluation with Gemini.

## Overview

The goal is to evaluate the quality of image edits using Gemini. The evaluation criteria are:
- **Instruction Adherence**: The edit must match the specified editing instructions.
- **Image-editing Quality**: The edit should appear seamless and natural.
- **Detail Preservation**: Regions not specified for editing should remain unchanged.

## Evaluation Criteria by Task Category

Different task categories use different evaluation metrics:

| Task Category | Metrics |
|---------------|---------|
| add, remove, replace, compose, extract | Prompt Compliance, Visual Seamlessness, Physical & Detail Fidelity |
| adjust, color, lowlevel | Prompt Compliance, Visual Seamlessness, Physical & Detail Fidelity |
| background, viewpoint | Prompt Compliance, Visual Seamlessness, Physical & Detail Fidelity |
| beauty, portrait | Prompt Compliance, Visual Seamlessness, Physical & Detail Fidelity |
| stylize | Style Fidelity, Content Preservation, Rendering Quality |
| motion | Prompt Compliance, Motion Realism, Visual Seamlessness |
| text | Text Fidelity, Visual Consistency, Background Preservation |

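The table above can be captured as a small lookup so an evaluation harness can pick the right metric names per task. A sketch (the dictionary structure is an assumption; the metric names mirror the table exactly):

```python
# Metric names per task category, transcribed from the table above.
SPATIAL = ("Prompt Compliance", "Visual Seamlessness", "Physical & Detail Fidelity")

METRICS_BY_TASK = {
    **{t: SPATIAL for t in (
        "add", "remove", "replace", "compose", "extract",
        "adjust", "color", "lowlevel",
        "background", "viewpoint",
        "beauty", "portrait",
    )},
    "stylize": ("Style Fidelity", "Content Preservation", "Rendering Quality"),
    "motion": ("Prompt Compliance", "Motion Realism", "Visual Seamlessness"),
    "text": ("Text Fidelity", "Visual Consistency", "Background Preservation"),
}

print(len(METRICS_BY_TASK))  # 15
```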
## Dependencies

```bash
pip install google-generativeai pillow tqdm
```

## Setup

1. **Gemini API Key**: Set your Gemini API key as an environment variable:
   ```bash
   export GEMINI_API_KEY="your-gemini-api-key"
   ```

2. **Images and JSON File**: You will need:
   - A folder containing the edited images (`--result_img_folder`).
   - A JSONL file containing edit instructions and metadata (`--edit_json`).
   - A JSON file containing evaluation prompts for each task category (`--prompts_json`).

## Usage

To run the evaluation script, use the following command:

```bash
python redbench_eval.py --result_img_folder <path_to_edited_images> --edit_json <path_to_redbench_jsonl> --prompts_json <path_to_prompts_json> --lang <language>
```

### Arguments:
- `--result_img_folder`: Directory containing the edited images (required).
- `--edit_json`: Path to the JSONL file containing edit instructions and metadata (required).
- `--prompts_json`: Path to the JSON file containing evaluation prompts for each task category (required).
- `--num_threads`: Number of concurrent threads. Default is `50`.
- `--lang`: Instruction language, `cn` or `eng`. Default is `cn`.

### Example:

```bash
python redbench_eval.py \
    --result_img_folder ./edited_images \
    --edit_json ./redbench.jsonl \
    --prompts_json ./prompts.json \
    --num_threads 50 \
    --lang cn
```

## Example Input/Output

### Input

A JSONL file containing image edit instructions (`redbench.jsonl`):

```jsonl
{"id": "1", "source": "redbench/add/add-1.png", "a_to_b_instructions": "在图片中绿色植物上增加一只七星瓢虫", "a_to_b_instructions_eng": "Add a seven-spotted ladybug on the green plant in the picture", "task": "add"}
{"id": "2", "source": "redbench/add/add-2.png", "a_to_b_instructions": "在咖啡杯里加一个白色心形拉花", "a_to_b_instructions_eng": "Add a white heart-shaped latte art in the coffee cup", "task": "add"}
{"id": "3", "source": "redbench/adjust/adjust-144.png", "a_to_b_instructions": "将天空的颜色调成更深的蓝色", "a_to_b_instructions_eng": "Change the sky color to a deeper blue", "task": "adjust"}
```

A JSON file containing evaluation prompts for each task category (`prompts.json`):

```json
{
  "add": "\nYou are a data rater specializing in grading object addition edits. You will be given two images ...",
  "remove": "\nYou are a data rater specializing in grading object removal edits. You will be given two images ...",
  "adjust": "\nYou are a data rater specializing in grading attribute alteration edits. You will be given two images ...",
  "stylize": "\nYou are a data rater specializing in grading style transfer edits. You will be given an input image, a reference style ...",
  ...
}
```

A folder containing the edited images (produced with the `--multi-folder` option at inference time):

```folder
├── edited_images
│   ├── add
│   │   ├── 1.png
│   │   ├── 2.png
│   │   ├── ...
│   ├── adjust
│   │   ├── 144.png
│   │   ├── ...
│   ├── ...
```

### Output

The script automatically computes and saves results in the result folder:

1. `result.json` - Detailed evaluation for each image:
```json
{
  "0": "Brief reasoning: A seven-spotted ladybug was successfully added on the green plant with natural color and placement.\nPrompt Compliance: 5\nVisual Seamlessness: 4\nPhysical & Detail Fidelity: 5",
  "1": "Brief reasoning: A white heart-shaped latte art was added in the coffee cup with good blending.\nPrompt Compliance: 5\nVisual Seamlessness: 4\nPhysical & Detail Fidelity: 4",
  "2": "Brief reasoning: The sky color was changed to a deeper blue with smooth transition.\nPrompt Compliance: 5\nVisual Seamlessness: 4\nPhysical & Detail Fidelity: 5",
  ...
}
```

2. `score.json` - Final scores, including per-category averages and the overall score:
```json
{
  "final_score": 4.3,
  "averaged_result": {
    "add": 4.5,
    "adjust": 4.2,
    "background": 3.8,
    ...
  },
  "averaged_data": {
    "0": 4.67,
    "1": 4.33,
    "2": 4.67,
    ...
  }
}
```

The `redbench_eval.py` script automatically computes:
1. Individual image scores (extracted from Gemini responses)
2. Per-category averages (`averaged_result`)
3. The overall final score (the average of all category scores)

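The per-image averages in `averaged_data` are consistent with taking the mean of the three metric scores in each response, e.g. (5 + 4 + 5) / 3 ≈ 4.67. A sketch of that extraction step, assuming the `Metric name: N` line format shown in `result.json` (the parsing code itself is illustrative, not the script's exact implementation):

```python
import re

def score_from_response(text):
    """Mean of all trailing `Metric name: N` scores in a Gemini response."""
    scores = [int(m) for m in re.findall(r":\s*(\d+)\s*$", text, flags=re.MULTILINE)]
    return round(sum(scores) / len(scores), 2)

response = (
    "Brief reasoning: A seven-spotted ladybug was successfully added on the "
    "green plant with natural color and placement.\n"
    "Prompt Compliance: 5\n"
    "Visual Seamlessness: 4\n"
    "Physical & Detail Fidelity: 5"
)
print(score_from_response(response))  # 4.67
```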
See the Output section above for the complete `score.json` structure.

## 🧩 License

**REDEdit-Bench** is released under the [Creative Commons Attribution–NonCommercial–NoDerivatives (CC BY-NC-ND 4.0)](https://creativecommons.org/licenses/by-nc-nd/4.0/) license.

- ✅ **Free for academic research purposes only**
- ❌ **Commercial use is prohibited**

🖼️ **Data Source:** All images included in REDEdit-Bench were legally purchased and obtained through official channels to ensure copyright compliance.

*By using this dataset, you agree to comply with the applicable license terms.*
REDEdit_Bench.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e3ad9b8c46a9a8485f7bda135c44a3444e549e6a87f10777ccf87d8e67d156bf
size 5571253284

REDEdit_Bench.tar ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:69310342c90b491372893028bb5acbdaa7d4570af1391e14ed01084d6dfc8864
size 5572106240