---
license: cc-by-nc-nd-4.0
language:
- zh
- en
---
# 🚩 RedBench (REDEdit-Bench)


<p align="center">
  <a href="https://huggingface.co/datasets/FireRedTeam/REDEdit-Bench">
    <img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-REDEdit--Bench-ffc107?logoColor=white" alt="Hugging Face Dataset" style="vertical-align: middle;">
  </a>
  <a href="https://github.com/FireRedTeam/FireRed-Image-Edit">
    <img src="https://img.shields.io/badge/GitHub-Code-black" alt="GitHub Code" style="vertical-align: middle;">
  </a>
  <a href="https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode">
    <img src="https://img.shields.io/badge/License-CC%20BY--NC--ND%204.0-lightgrey.svg" alt="License" style="vertical-align: middle;">
  </a>
  <a href="https://arxiv.org/abs/2602.13344">
    <img src="https://img.shields.io/badge/Report-arXiv-b5212f.svg?logo=arxiv" alt="Technical Report" style="vertical-align: middle;">
  </a>
</p>

## πŸ”₯ Introduction

**RedBench** (also known as REDEdit-Bench) is a comprehensive benchmark designed to evaluate the capabilities of current image editing models. 

Our main goal is to build more diverse scenarios and editing instructions that better align with human language. We collected over 3,000 images from the internet, and after careful expert-designed selection, we constructed **1,673 bilingual (Chinese–English) editing pairs** across **15 categories**.

πŸ“’ **Note on Dataset Size**: The original benchmark described in the paper consists of 1,673 image pairs. However, due to strict redistribution licensing restrictions on certain commercial assets, the public release version has been curated to **1,542 pairs**. This ensures full compliance with copyright laws while maintaining the diversity and quality of the benchmark.

## ✨ Key Features

- **πŸ—£οΈ Human-Aligned Instructions**: Diverse scenarios and editing instructions that closely mimic real-world human usage.
- **🌐 Bilingual Support**: Full support for both Chinese and English editing instructions.
- **πŸ›‘οΈ Quality Assurance**: Carefully curated by experts from a massive collection of source images.
- **🧩 Diverse Tasks**: Covers 15 distinct categories including Object Addition, Removal, Replacement, Style Transfer, and more.

## πŸ“‚ Data Structure & Examples

The dataset is organized in JSONL format. Each entry contains the image source, bilingual instructions, and the specific task category.

### Task Categories

The benchmark covers 15 different task categories:

| Category | Count | Description |
|----------|-------|-------------|
| add | 143 | Object Addition |
| adjust | 156 | Attribute Adjustment |
| background | 91 | Background Modification |
| beauty | 79 | Beauty Enhancement |
| color | 99 | Color Modification |
| compose | 100 | Image Composition |
| extract | 95 | Element Extraction |
| lowlevel | 47 | Low-level Processing |
| motion | 78 | Motion Addition |
| portrait | 102 | Portrait Editing |
| remove | 147 | Object Removal |
| replace | 140 | Object Replacement |
| stylize | 92 | Style Transfer |
| text | 123 | Text Editing |
| viewpoint | 50 | Viewpoint Change |
| all | 1542 | All Tasks |

### Sample Data

```json
{"id": "1", "source": "redbench/add/add-1.png", "a_to_b_instructions": "εœ¨ε›Ύη‰‡δΈ­η»Ώθ‰²ζ€η‰©δΈŠε’žεŠ δΈ€εͺδΈƒζ˜Ÿη“’θ™«", "a_to_b_instructions_eng": "Add a seven-spotted ladybug on the green plant in the picture", "task": "add"}
{"id": "2", "source": "redbench/add/add-2.png", "a_to_b_instructions": "εœ¨ε’–ε•‘ζ―ι‡ŒεŠ δΈ€δΈͺη™½θ‰²εΏƒε½’ζ‹‰θŠ±", "a_to_b_instructions_eng": "Add a white heart-shaped latte art in the coffee cup", "task": "add"}
{"id": "3", "source": "redbench/add/add-3.png", "a_to_b_instructions": "εœ¨ι©¬θ·―δΈŠε’žεŠ δΈ€δΈͺη©ΏθΏεŠ¨ζœθ·‘ζ­₯ηš„η”·δΊΊ", "a_to_b_instructions_eng": "Add a man running in sportswear on the road", "task": "add"}
```
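Since each line is a standalone JSON object, the benchmark can be loaded and filtered by task with a few lines of Python (a minimal sketch; the function name is illustrative, not from the official scripts):

```python
import json

def load_entries(jsonl_path, task="all"):
    """Load benchmark entries from a JSONL file, optionally filtering to one task category."""
    entries = []
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            entry = json.loads(line)
            if task == "all" or entry["task"] == task:
                entries.append(entry)
    return entries
```

Passing `task="add"`, for instance, keeps only the Object Addition pairs, mirroring the `--edit-task` filter used by the inference script below.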

# Generate Images

Before evaluating a model, you first need to use the provided JSONL file (which contains the metadata) together with the original image files to generate the corresponding edited images with your editing model.

We provide the inference script `redbench_infer.py` for generating edited images. This script supports multi-GPU distributed inference using Accelerate.

## Dependencies
Install required dependencies:
```bash
pip install accelerate diffusers transformers pillow tqdm
```
Then download our dataset [REDEdit_Bench.tar](https://huggingface.co/datasets/FireRedTeam/REDEdit_Bench/resolve/main/REDEdit_Bench.tar?download=true) and extract the tar file.

## Usage

To run the inference script, use the following command:

```bash
accelerate launch --num_processes <num_gpus> redbench_infer.py --model-path <path_to_model> --jsonl-path <path_to_redbench_jsonl> --save-path <path_to_save_results>
```

### Arguments:
- `--model-path`: Path to the model. Default is `FireRedTeam/FireRed-Image-Edit-1.0`.
- `--lora-name`: Path to LoRA weights (optional).
- `--save-path`: Directory to save the generated images (required).
- `--jsonl-path`: Path to the JSONL file containing edit instructions (required).
- `--edit-task`: Specific task to process (e.g., `add`, `remove`, `stylize`). Default is `all`.
- `--multi-folder`: Save outputs into per-task subfolders instead of a single flat directory (optional).
- `--save-key`: Key name under which the result path is saved. Default is `result`.
- `--seed`: Random seed. Default is `43`.
- `--lang`: Instruction language, `cn` or `eng`. Default is `cn`.
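Under Accelerate, multi-GPU inference reduces to each process handling a disjoint slice of the JSONL entries. A minimal sketch of round-robin sharding (in the real script the process index and count would come from Accelerate at runtime; here they are plain parameters):

```python
def shard_entries(entries, process_index, num_processes):
    """Return the round-robin slice of entries assigned to one process."""
    return entries[process_index::num_processes]
```

Every entry lands in exactly one shard, so the per-process results can simply be concatenated afterwards.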

### Example:

```bash
# Generate all edited images using 8 GPUs
accelerate launch --num_processes 8 redbench_infer.py \
    --model-path FireRedTeam/FireRed-Image-Edit-1.1 \
    --jsonl-path ./redbench.jsonl \
    --save-path ./edited_images_cn \
    --edit-task all \
    --lang cn
```

## Example Input/Output

### Input

A JSONL file containing image edit instructions (`redbench.jsonl`):

```jsonl
{"id": "1", "source": "redbench/add/add-1.png", "a_to_b_instructions": "εœ¨ε›Ύη‰‡δΈ­η»Ώθ‰²ζ€η‰©δΈŠε’žεŠ δΈ€εͺδΈƒζ˜Ÿη“’θ™«", "a_to_b_instructions_eng": "Add a seven-spotted ladybug on the green plant in the picture", "task": "add"}
{"id": "2", "source": "redbench/add/add-2.png", "a_to_b_instructions": "εœ¨ε’–ε•‘ζ―ι‡ŒεŠ δΈ€δΈͺη™½θ‰²εΏƒε½’ζ‹‰θŠ±", "a_to_b_instructions_eng": "Add a white heart-shaped latte art in the coffee cup", "task": "add"}
{"id": "3", "source": "redbench/adjust/adjust-144.png", "a_to_b_instructions": "ε°†ε€©η©Ίηš„ι’œθ‰²θ°ƒζˆζ›΄ζ·±ηš„θ“θ‰²", "a_to_b_instructions_eng": "Change the sky color to a deeper blue", "task": "adjust"}
```

A folder containing original images:

```folder
β”œβ”€β”€ redbench                    
β”‚   β”œβ”€β”€ add     
β”‚   β”‚   β”œβ”€β”€ add-1.png                 
β”‚   β”‚   β”œβ”€β”€ add-2.png                 
β”‚   β”‚   β”œβ”€β”€ ...                 
β”‚   β”œβ”€β”€ adjust                             
β”‚   β”‚   β”œβ”€β”€ adjust-144.png
β”‚   β”‚   β”œβ”€β”€ ...
β”‚   β”œβ”€β”€ ...
```

### Output

A folder containing edited images:

```folder
# Without --multi-folder option:
β”œβ”€β”€ edited_images                    
β”‚   β”œβ”€β”€ 1.png                 
β”‚   β”œβ”€β”€ 2.png            
β”‚   β”œβ”€β”€ 3.png           
β”‚   ...            
β”‚   β”œβ”€β”€ result.jsonl

# With --multi-folder option:
β”œβ”€β”€ edited_images                    
β”‚   β”œβ”€β”€ add
β”‚   β”‚   β”œβ”€β”€ 1.png
β”‚   β”‚   β”œβ”€β”€ 2.png
β”‚   β”‚   β”œβ”€β”€ ...
β”‚   β”œβ”€β”€ adjust
β”‚   β”‚   β”œβ”€β”€ 144.png
β”‚   β”‚   β”œβ”€β”€ ...
β”‚   ...
β”‚   β”œβ”€β”€ result.jsonl
``` 
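The flat output layout above can be reproduced with a small helper: each edited image is named after the entry's `id`, and its path is recorded in `result.jsonl` under the field selected by `--save-key`. This is a sketch of the flat (non-`--multi-folder`) case only; the helper name is illustrative:

```python
import os

def result_record(entry, save_path, save_key="result"):
    """Compute the flat output image path for an entry and the row written to result.jsonl."""
    out_path = os.path.join(save_path, f"{entry['id']}.png")
    record = dict(entry)
    record[save_key] = out_path  # field name controlled by --save-key
    return out_path, record
```

With `--multi-folder`, the listing above shows outputs additionally grouped into per-task subfolders.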

# Image Editing Evaluation using Gemini-3-Flash

This project evaluates image editing results using the **Gemini-3-Flash API**. The system processes pairs of original and edited images, comparing them against a predefined set of criteria such as instruction adherence, image-editing quality, and detail preservation.

We provide the evaluation script `redbench_eval.py` for automated evaluation using Gemini.

## Overview

Gemini scores each edited image against the following criteria:
- **Instruction Adherence**: The edit must match the specified editing instructions.
- **Image-editing Quality**: The edit should appear seamless and natural.
- **Detail Preservation**: Regions not specified for editing should remain unchanged.

## Evaluation Criteria by Task Category

Different task categories use different evaluation metrics:

| Task Category | Metrics |
|---------------|---------|
| add, remove, replace, compose, extract | Prompt Compliance, Visual Seamlessness, Physical & Detail Fidelity |
| adjust, color, lowlevel | Prompt Compliance, Visual Seamlessness, Physical & Detail Fidelity |
| background, viewpoint | Prompt Compliance, Visual Seamlessness, Physical & Detail Fidelity |
| beauty, portrait | Prompt Compliance, Visual Seamlessness, Physical & Detail Fidelity |
| stylize | Style Fidelity, Content Preservation, Rendering Quality |
| motion | Prompt Compliance, Motion Realism, Visual Seamlessness |
| text | Text Fidelity, Visual Consistency, Background Preservation |
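Since most categories share the same triad of metrics, the table collapses to a lookup with a shared default (a sketch encoding the table above, not code from the evaluation script):

```python
# Metrics shared by add/remove/replace/compose/extract, adjust/color/lowlevel,
# background/viewpoint, and beauty/portrait.
DEFAULT_METRICS = ["Prompt Compliance", "Visual Seamlessness", "Physical & Detail Fidelity"]

# Categories with their own metric sets.
TASK_METRICS = {
    "stylize": ["Style Fidelity", "Content Preservation", "Rendering Quality"],
    "motion": ["Prompt Compliance", "Motion Realism", "Visual Seamlessness"],
    "text": ["Text Fidelity", "Visual Consistency", "Background Preservation"],
}

def metrics_for(task):
    """Return the three evaluation metric names for a task category."""
    return TASK_METRICS.get(task, DEFAULT_METRICS)
```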

## Dependencies

```bash
pip install google-generativeai pillow tqdm
```

## Setup

1. **Gemini API Key**: Set your Gemini API key as an environment variable:
   ```bash
   export GEMINI_API_KEY="your-gemini-api-key"
   ```

2. **Images and JSON File**: You will need:
   - A folder containing the edited images (`--result_img_folder`).
   - A JSONL file containing edit instructions and metadata (`--edit_json`).
   - A JSON file containing evaluation prompts for each task category (`--prompts_json`).

## Usage

To run the evaluation script, use the following command:

```bash
python redbench_eval.py --result_img_folder <path_to_edited_images> --edit_json <path_to_redbench_jsonl> --prompts_json <path_to_prompts_json> --lang <language>
```

### Arguments:
- `--result_img_folder`: The directory containing the edited images (required).
- `--edit_json`: Path to the JSONL file containing edit instructions and metadata (required).
- `--prompts_json`: Path to the JSON file containing evaluation prompts for each task category (required).
- `--num_threads`: Number of concurrent threads. Default is 50.
- `--lang`: Instruction language, `cn` or `eng`. Default is `cn`.

### Example:

```bash
python redbench_eval.py \
    --result_img_folder ./edited_images \
    --edit_json ./redbench.jsonl \
    --prompts_json ./prompts.json \
    --num_threads 50 \
    --lang cn
```

## Example Input/Output

### Input

A JSONL file containing image edit instructions (`redbench.jsonl`):

```jsonl
{"id": "1", "source": "redbench/add/add-1.png", "a_to_b_instructions": "εœ¨ε›Ύη‰‡δΈ­η»Ώθ‰²ζ€η‰©δΈŠε’žεŠ δΈ€εͺδΈƒζ˜Ÿη“’θ™«", "a_to_b_instructions_eng": "Add a seven-spotted ladybug on the green plant in the picture", "task": "add"}
{"id": "2", "source": "redbench/add/add-2.png", "a_to_b_instructions": "εœ¨ε’–ε•‘ζ―ι‡ŒεŠ δΈ€δΈͺη™½θ‰²εΏƒε½’ζ‹‰θŠ±", "a_to_b_instructions_eng": "Add a white heart-shaped latte art in the coffee cup", "task": "add"}
{"id": "3", "source": "redbench/adjust/adjust-144.png", "a_to_b_instructions": "ε°†ε€©η©Ίηš„ι’œθ‰²θ°ƒζˆζ›΄ζ·±ηš„θ“θ‰²", "a_to_b_instructions_eng": "Change the sky color to a deeper blue", "task": "adjust"}
```

A JSON file containing evaluation prompts for each task category (`prompts.json`):

```json
{
  "add": "\nYou are a data rater specializing in grading object addition edits. You will be given two images ...",
  "remove": "\nYou are a data rater specializing in grading object removal edits. You will be given two images ...",
  "adjust": "\nYou are a data rater specializing in grading attribute alteration edits. You will be given two images ....",
  "stylize": "\nYou are a data rater specializing in grading style transfer edits. You will be given an input image, a reference style...",
  ...
}
```

A folder containing edited images (with `--multi-folder` option from inference):

```folder
β”œβ”€β”€ edited_images                    
β”‚   β”œβ”€β”€ add
β”‚   β”‚   β”œβ”€β”€ 1.png                 
β”‚   β”‚   β”œβ”€β”€ 2.png
β”‚   β”‚   β”œβ”€β”€ ...                
β”‚   β”œβ”€β”€ adjust                             
β”‚   β”‚   β”œβ”€β”€ 144.png
β”‚   β”‚   ...
β”‚   ...                 
```

### Output

The script automatically computes and saves results in the result folder:

1. `result.json` - Detailed evaluation for each image:
```json
{
    "0": "Brief reasoning: A seven-spotted ladybug was successfully added on the green plant with natural color and placement.\nPrompt Compliance: 5\nVisual Seamlessness: 4\nPhysical & Detail Fidelity: 5",
    "1": "Brief reasoning: A white heart-shaped latte art was added in the coffee cup with good blending.\nPrompt Compliance: 5\nVisual Seamlessness: 4\nPhysical & Detail Fidelity: 4",
    "2": "Brief reasoning: The sky color was changed to a deeper blue with smooth transition.\nPrompt Compliance: 5\nVisual Seamlessness: 4\nPhysical & Detail Fidelity: 5",
    ...
}
```

2. `score.json` - Final scores including per-category averages and overall score:
```json
{
    "final_score": 4.3,
    "averaged_result": {
        "add": 4.5,
        "adjust": 4.2,
        "background": 3.8,
        ...
    },
    "averaged_data": {
        "0": 4.67,
        "1": 4.33,
        "2": 4.67,
        ...
    }
}
```

The `redbench_eval.py` script automatically computes three levels of results: individual image scores (extracted from the Gemini responses), per-category averages (`averaged_result`), and the overall final score (the mean of the category averages).
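Given the `result.json` and `score.json` formats shown above, the aggregation can be sketched with a regex over the Gemini response text (helper names are illustrative; the actual script may differ):

```python
import re

# Matches a trailing ": <number>" at the end of each metric line.
SCORE_RE = re.compile(r":\s*([0-9]+(?:\.[0-9]+)?)\s*$", re.MULTILINE)

def image_score(response_text):
    """Average the numeric scores found at the end of each metric line."""
    scores = [float(s) for s in SCORE_RE.findall(response_text)]
    return round(sum(scores) / len(scores), 2) if scores else None

def final_score(averaged_data, id_to_task):
    """Per-category averages and the overall score (mean of category averages)."""
    by_task = {}
    for image_id, score in averaged_data.items():
        by_task.setdefault(id_to_task[image_id], []).append(score)
    averaged_result = {t: round(sum(v) / len(v), 2) for t, v in by_task.items()}
    final = round(sum(averaged_result.values()) / len(averaged_result), 2)
    return final, averaged_result
```

On the first sample response above ("Prompt Compliance: 5 / Visual Seamlessness: 4 / Physical & Detail Fidelity: 5"), `image_score` yields 4.67, matching the `averaged_data` entry.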

## 🧩 License
**REDEdit-Bench** is released under the [Creative Commons Attribution–NonCommercial–NoDerivatives (CC BY-NC-ND 4.0)](https://creativecommons.org/licenses/by-nc-nd/4.0/) license.

- βœ… **Free for academic research purposes only**
- ❌ **Commercial use is prohibited**

πŸ–ΌοΈ **Data Source:** All images included in REDEdit-Bench were legally purchased and obtained through official channels to ensure copyright compliance.

*By using this dataset, you agree to comply with the applicable license terms.*

## πŸ–ŠοΈ Citation

If you find our work useful, please consider citing:

```bibtex
@misc{firered2026rededit,
      title={FireRed-Image-Edit-1.0 Technical Report},
      author={Super Intelligence Team},
      year={2026},
      eprint={2602.13344},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2602.13344},
}
```