---
pretty_name: TextEdit-Bench
license: mit
task_categories:
- image-to-image
tags:
- computer-vision
- image-editing
- benchmark
configs:
- config_name: default
  data_files:
  - split: train
    path: metadata.jsonl
dataset_info:
  features:
  - name: original_image
    dtype: image
  - name: gt_image
    dtype: image
  - name: id
    dtype: int64
  - name: category
    dtype: string
  - name: source_text
    dtype: string
  - name: target_text
    dtype: string
  - name: prompt
    dtype: string
  - name: gt_caption
    dtype: string
---

<div align="center">

# TextEdit: A High-Quality, Multi-Scenario Text Editing Benchmark for Generation Models

<p align="center">
  <a href='https://arxiv.org/abs/2508.18265'>
    <img src='https://img.shields.io/badge/Paper-2508.18265-brown?style=flat&logo=arXiv' alt='arXiv PDF'>
  </a>
  <a href='https://huggingface.co/collections/OpenGVLab/TextEdit'>
    <img src='https://img.shields.io/badge/Huggingface-Data-blue?style=flat&logo=huggingface' alt='HuggingFace Data'>
  </a>
</p>

[Danni Yang](https://scholar.google.com/citations?user=qDsgBJAAAAAJ&hl=zh-CN&oi=sra),
[Sitao Chen](https://github.com/fudan-chen),
[Changyao Tian](https://scholar.google.com/citations?user=kQ3AisQAAAAJ&hl=zh-CN&oi=ao)

If you find our work helpful, please give us a ⭐ or cite our paper. See the appendix of the InternVL-U technical report for more details.

</div>

## 🎉 News
- **[2026/02/25]** TextEdit benchmark released.
- **[2026/02/25]** Evaluation code and initial baselines released.
- **[2026/02/25]** Leaderboard updated with latest models.

## 📖 Introduction
<img src="assets/intro.png" width="100%">

Text editing is a fundamental yet challenging capability for modern image generation and editing models. A growing number of powerful multimodal generation models, such as Qwen-Image and Nano-Banana-Pro, now offer strong text rendering and editing capabilities.
Unlike general image editing, text manipulation requires:

- Precise spatial alignment
- Font and style consistency
- Background preservation
- Layout-constrained reasoning

We introduce **TextEdit**, a **high-quality**, **multi-scenario benchmark** designed to evaluate **fine-grained text editing capabilities** in image generation models.

TextEdit covers a diverse set of real-world and virtual scenarios, spanning **18 subcategories** with a total of **2,148 high-quality source images** and **manually annotated edited ground-truth images**.

To comprehensively assess model performance, we combine **classic OCR and image-fidelity metrics with modern multimodal-LLM-based evaluation**, scoring _target accuracy_, _text preservation_, _scene integrity_, _local realism_, and _visual coherence_. This dual-track protocol captures both low-level fidelity and high-level semantic quality.

Our goal is to provide a **standardized, realistic, and scalable** benchmark for text editing research.

---

## 🏆 Leaderboard
<details>
<summary><strong>📊 Full Benchmark Results</strong></summary>
<div style="max-width:1050px; margin:auto;">

<table>
<thead>
<tr>
<th rowspan="2" align="left">Models</th>
<th rowspan="2" align="center"># Params</th>
<th colspan="7" align="center">Real</th>
<th colspan="7" align="center">Virtual</th>
</tr>
<tr>
<th>OA</th><th>OP</th><th>OR</th><th>F1</th><th>NED</th><th>CLIP</th><th>AES</th>
<th>OA</th><th>OP</th><th>OR</th><th>F1</th><th>NED</th><th>CLIP</th><th>AES</th>
</tr>
</thead>
<tbody>
<tr><td colspan="16"><strong><em>Generation Models</em></strong></td></tr>
<tr>
<td>Qwen-Image-Edit</td><td align="center">20B</td>
<td>0.75</td><td>0.68</td><td>0.66</td><td>0.67</td><td>0.71</td><td>0.75</td><td>5.72</td>
<td>0.78</td><td>0.75</td><td>0.73</td><td>0.74</td><td>0.75</td><td>0.81</td><td>5.21</td>
</tr>
<tr>
<td>GPT-Image-1.5</td><td align="center">-</td>
<td>0.74</td><td>0.69</td><td>0.67</td><td>0.68</td><td>0.68</td><td>0.75</td><td>5.78</td>
<td>0.73</td><td>0.72</td><td>0.71</td><td>0.71</td><td>0.70</td><td>0.80</td><td>5.28</td>
</tr>
<tr>
<td>Nano Banana Pro</td><td align="center">-</td>
<td>0.77</td><td>0.72</td><td>0.70</td><td>0.71</td><td>0.72</td><td>0.75</td><td>5.79</td>
<td>0.80</td><td>0.78</td><td>0.77</td><td>0.78</td><td>0.78</td><td>0.81</td><td>5.28</td>
</tr>
<tr><td colspan="16"><strong><em>Unified Models</em></strong></td></tr>
<tr>
<td>Lumina-DiMOO</td><td align="center">8B</td>
<td>0.22</td><td>0.23</td><td>0.19</td><td>0.20</td><td>0.19</td><td>0.69</td><td>5.53</td>
<td>0.22</td><td>0.25</td><td>0.21</td><td>0.22</td><td>0.20</td><td>0.72</td><td>4.76</td>
</tr>
<tr>
<td>Ovis-U1</td><td align="center">2.4B+1.2B</td>
<td>0.40</td><td>0.37</td><td>0.34</td><td>0.35</td><td>0.35</td><td>0.72</td><td>5.32</td>
<td>0.37</td><td>0.40</td><td>0.38</td><td>0.39</td><td>0.33</td><td>0.75</td><td>4.66</td>
</tr>
<tr>
<td>BAGEL</td><td align="center">7B+7B</td>
<td>0.60</td><td>0.59</td><td>0.53</td><td>0.55</td><td>0.55</td><td>0.74</td><td>5.71</td>
<td>0.57</td><td>0.60</td><td>0.56</td><td>0.57</td><td>0.54</td><td>0.78</td><td>5.19</td>
</tr>
<tr>
<td><strong>InternVL-U (Ours)</strong></td><td align="center">2B+1.7B</td>
<td>0.77</td><td>0.73</td><td>0.70</td><td>0.71</td><td>0.72</td><td>0.75</td><td>5.70</td>
<td>0.79</td><td>0.77</td><td>0.75</td><td>0.75</td><td>0.77</td><td>0.80</td><td>5.12</td>
</tr>
</tbody>
</table>

</div>

<div style="max-width:1050px; margin:auto;">

<table>
<thead>
<tr>
<th rowspan="2" align="left">Models</th>
<th rowspan="2" align="center"># Params</th>
<th colspan="6" align="center">Real</th>
<th colspan="6" align="center">Virtual</th>
</tr>
<tr>
<th>TA</th><th>TP</th><th>SI</th><th>LR</th><th>VC</th><th>Avg</th>
<th>TA</th><th>TP</th><th>SI</th><th>LR</th><th>VC</th><th>Avg</th>
</tr>
</thead>
<tbody>
<tr><td colspan="14"><strong><em>Generation Models</em></strong></td></tr>
<tr>
<td>Qwen-Image-Edit</td><td align="center">20B</td>
<td>0.92</td><td>0.82</td><td>0.75</td><td>0.57</td><td>0.80</td><td>0.77</td>
<td>0.57</td><td>0.79</td><td>0.92</td><td>0.80</td><td>0.77</td><td>0.77</td>
</tr>
<tr>
<td>GPT-Image-1.5</td><td align="center">-</td>
<td>0.96</td><td>0.94</td><td>0.86</td><td>0.80</td><td>0.93</td><td>0.90</td>
<td>0.82</td><td>0.93</td><td>0.96</td><td>0.91</td><td>0.87</td><td>0.90</td>
</tr>
<tr>
<td>Nano Banana Pro</td><td align="center">-</td>
<td>0.96</td><td>0.95</td><td>0.85</td><td>0.88</td><td>0.93</td><td>0.91</td>
<td>0.87</td><td>0.92</td><td>0.96</td><td>0.94</td><td>0.89</td><td>0.92</td>
</tr>
<tr><td colspan="14"><strong><em>Unified Models</em></strong></td></tr>
<tr>
<td>Lumina-DiMOO</td><td align="center">8B</td>
<td>0.17</td><td>0.06</td><td>0.04</td><td>0.02</td><td>0.05</td><td>0.09</td>
<td>0.02</td><td>0.06</td><td>0.16</td><td>0.05</td><td>0.03</td><td>0.08</td>
</tr>
<tr>
<td>Ovis-U1</td><td align="center">2.4B+1.2B</td>
<td>0.31</td><td>0.12</td><td>0.12</td><td>0.07</td><td>0.18</td><td>0.18</td>
<td>0.06</td><td>0.16</td><td>0.31</td><td>0.14</td><td>0.13</td><td>0.19</td>
</tr>
<tr>
<td>BAGEL</td><td align="center">7B+7B</td>
<td>0.68</td><td>0.60</td><td>0.38</td><td>0.35</td><td>0.56</td><td>0.53</td>
<td>0.38</td><td>0.51</td><td>0.68</td><td>0.62</td><td>0.42</td><td>0.54</td>
</tr>
<tr>
<td><strong>InternVL-U (Ours)</strong></td><td align="center">2B+1.7B</td>
<td>0.94</td><td>0.90</td><td>0.71</td><td>0.80</td><td>0.80</td><td>0.88</td>
<td>0.87</td><td>0.86</td><td>0.91</td><td>0.82</td><td>0.62</td><td>0.83</td>
</tr>
</tbody>
</table>

</div>
</details>

<details>
<summary><strong>📊 Mini-set Benchmark Results (500 samples)</strong></summary>
<div style="max-width:1050px; margin:auto;">

<table>
<thead>
<tr>
<th rowspan="2" align="left">Models</th>
<th rowspan="2" align="center"># Params</th>
<th colspan="7" align="center">Real</th>
<th colspan="7" align="center">Virtual</th>
</tr>
<tr>
<th>OA</th><th>OP</th><th>OR</th><th>F1</th><th>NED</th><th>CLIP</th><th>AES</th>
<th>OA</th><th>OP</th><th>OR</th><th>F1</th><th>NED</th><th>CLIP</th><th>AES</th>
</tr>
</thead>
<tbody>
<tr><td colspan="16"><strong><em>Generation Models</em></strong></td></tr>
<tr>
<td>Qwen-Image-Edit</td><td align="center">20B</td>
<td>0.76</td><td>0.69</td><td>0.67</td><td>0.67</td><td>0.70</td><td>0.75</td><td>5.81</td>
<td>0.74</td><td>0.71</td><td>0.70</td><td>0.70</td><td>0.70</td><td>0.80</td><td>5.27</td>
</tr>
<tr>
<td>GPT-Image-1.5</td><td align="center">-</td>
<td>0.72</td><td>0.68</td><td>0.66</td><td>0.67</td><td>0.67</td><td>0.75</td><td>5.85</td>
<td>0.68</td><td>0.69</td><td>0.68</td><td>0.68</td><td>0.65</td><td>0.80</td><td>5.32</td>
</tr>
<tr>
<td>Nano Banana Pro</td><td align="center">-</td>
<td>0.76</td><td>0.71</td><td>0.69</td><td>0.70</td><td>0.70</td><td>0.75</td><td>5.86</td>
<td>0.77</td><td>0.76</td><td>0.75</td><td>0.75</td><td>0.76</td><td>0.81</td><td>5.32</td>
</tr>
<tr><td colspan="16"><strong><em>Unified Models</em></strong></td></tr>
<tr>
<td>Lumina-DiMOO</td><td align="center">8B</td>
<td>0.20</td><td>0.22</td><td>0.18</td><td>0.19</td><td>0.19</td><td>0.70</td><td>5.58</td>
<td>0.22</td><td>0.25</td><td>0.21</td><td>0.22</td><td>0.19</td><td>0.73</td><td>4.87</td>
</tr>
<tr>
<td>Ovis-U1</td><td align="center">2.4B+1.2B</td>
<td>0.37</td><td>0.34</td><td>0.32</td><td>0.32</td><td>0.33</td><td>0.72</td><td>5.39</td>
<td>0.39</td><td>0.41</td><td>0.38</td><td>0.39</td><td>0.33</td><td>0.74</td><td>4.75</td>
</tr>
<tr>
<td>BAGEL</td><td align="center">7B+7B</td>
<td>0.61</td><td>0.59</td><td>0.52</td><td>0.54</td><td>0.54</td><td>0.74</td><td>5.79</td>
<td>0.53</td><td>0.58</td><td>0.53</td><td>0.55</td><td>0.51</td><td>0.78</td><td>5.25</td>
</tr>
<tr>
<td><strong>InternVL-U (Ours)</strong></td><td align="center">2B+1.7B</td>
<td>0.77</td><td>0.74</td><td>0.70</td><td>0.71</td><td>0.71</td><td>0.76</td><td>5.79</td>
<td>0.74</td><td>0.72</td><td>0.69</td><td>0.70</td><td>0.72</td><td>0.79</td><td>5.14</td>
</tr>
</tbody>
</table>

</div>

<div style="max-width:1050px; margin:auto;">

<table>
<thead>
<tr>
<th rowspan="2" align="left">Models</th>
<th rowspan="2" align="center"># Params</th>
<th colspan="6" align="center">Real</th>
<th colspan="6" align="center">Virtual</th>
</tr>
<tr>
<th>TA</th><th>TP</th><th>SI</th><th>LR</th><th>VC</th><th>Avg</th>
<th>TA</th><th>TP</th><th>SI</th><th>LR</th><th>VC</th><th>Avg</th>
</tr>
</thead>
<tbody>
<tr><td colspan="14"><strong><em>Generation Models</em></strong></td></tr>
<tr>
<td>Qwen-Image-Edit</td><td align="center">20B</td>
<td>0.93</td><td>0.85</td><td>0.77</td><td>0.55</td><td>0.78</td><td>0.80</td>
<td>0.60</td><td>0.82</td><td>0.91</td><td>0.81</td><td>0.74</td><td>0.76</td>
</tr>
<tr>
<td>GPT-Image-1.5</td><td align="center">-</td>
<td>0.97</td><td>0.94</td><td>0.86</td><td>0.79</td><td>0.92</td><td>0.91</td>
<td>0.85</td><td>0.93</td><td>0.95</td><td>0.92</td><td>0.83</td><td>0.88</td>
</tr>
<tr>
<td>Nano Banana Pro</td><td align="center">-</td>
<td>0.96</td><td>0.95</td><td>0.85</td><td>0.86</td><td>0.92</td><td>0.91</td>
<td>0.87</td><td>0.92</td><td>0.96</td><td>0.93</td><td>0.87</td><td>0.92</td>
</tr>
<tr><td colspan="14"><strong><em>Unified Models</em></strong></td></tr>
<tr>
<td>Lumina-DiMOO</td><td align="center">8B</td>
<td>0.16</td><td>0.04</td><td>0.04</td><td>0.02</td><td>0.06</td><td>0.08</td>
<td>0.02</td><td>0.05</td><td>0.19</td><td>0.07</td><td>0.03</td><td>0.10</td>
</tr>
<tr>
<td>Ovis-U1</td><td align="center">2.4B+1.2B</td>
<td>0.29</td><td>0.11</td><td>0.11</td><td>0.08</td><td>0.20</td><td>0.17</td>
<td>0.04</td><td>0.16</td><td>0.35</td><td>0.18</td><td>0.15</td><td>0.22</td>
</tr>
<tr>
<td>BAGEL</td><td align="center">7B+7B</td>
<td>0.68</td><td>0.61</td><td>0.38</td><td>0.34</td><td>0.59</td><td>0.53</td>
<td>0.36</td><td>0.52</td><td>0.69</td><td>0.64</td><td>0.40</td><td>0.54</td>
</tr>
<tr>
<td><strong>InternVL-U (Ours)</strong></td><td align="center">2B+1.7B</td>
<td>0.94</td><td>0.91</td><td>0.72</td><td>0.73</td><td>0.75</td><td>0.89</td>
<td>0.88</td><td>0.87</td><td>0.90</td><td>0.78</td><td>0.57</td><td>0.79</td>
</tr>
</tbody>
</table>

</div>
</details>

## 🛠️ Quick Start

### 📂 1. Data Preparation
You can download the images from [this page](https://huggingface.co/collections/OpenGVLab/TextEdit). The TextEdit benchmark data is organized under `data/` by category:
- **Virtual** (categories `1.x.x`): Synthetic/virtual scene images
- **Real** (categories `2.x`): Real-world scene images

Evaluation prompts are provided under `eval_prompts/` in two subsets:

| Subset | Directory | Description |
|--------|-----------|-------------|
| **Fullset** | `eval_prompts/fullset/` | Complete benchmark with all samples |
| **Miniset (500)** | `eval_prompts/miniset/` | 500-sample subset uniformly sampled from the fullset |

Each `.jsonl` file contains the per-sample fields `id`, `prompt`, `original_image`, `gt_image`, `source_text`, `target_text`, and `gt_caption`. The dataset can also be loaded directly through the `datasets` library, as sketched below.

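As a convenience, the snippet below shows one way to iterate the samples. It is a minimal sketch: the `OpenGVLab/TextEdit` repository id is a placeholder taken from the collection link above, and the split logic simply mirrors the category prefixes described earlier.

```python
from datasets import load_dataset

# Minimal sketch; "OpenGVLab/TextEdit" is a placeholder repo id taken from
# the collection link above -- substitute the actual dataset path.
ds = load_dataset("OpenGVLab/TextEdit", split="train")

sample = ds[0]
# Categories 1.x.x are Virtual scenes, 2.x are Real scenes (see above).
split_name = "Virtual" if sample["category"].startswith("1") else "Real"
print(split_name, sample["source_text"], "->", sample["target_text"])
sample["original_image"].show()  # decoded to a PIL image by the `image` feature
```
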
### 🤖 2. Model Output Preparation
Run image-editing inference with your own model, then organize the outputs in the folder structure shown below so the evaluation scripts can find them.

```
output/
├── internvl-u/                  # Your Model Name
│   ├── 1.1.1/                   # Category Name
│   │   ├── 1007088003726.0.jpg  # Model Output Images
│   │   ├── 1013932004096.0.jpg
│   │   └── ...
│   ├── 1.1.2/
│   ├── 1.1.3/
│   ├── ...
│   └── 2.7/
```

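If your inference loop is in Python, a small helper along these lines keeps outputs in the expected layout. It is a sketch that assumes model outputs are PIL images; the function and argument names are illustrative.

```python
from pathlib import Path

def save_output(image, model_name: str, category: str, sample_id: str,
                root: str = "output") -> Path:
    """Save one edited image as {root}/{model_name}/{category}/{sample_id}.jpg,
    mirroring the directory tree above. `image` is assumed to be a PIL.Image."""
    out_dir = Path(root) / model_name / category
    out_dir.mkdir(parents=True, exist_ok=True)
    out_path = out_dir / f"{sample_id}.jpg"
    image.save(out_path)
    return out_path

# e.g. save_output(edited, "internvl-u", "1.1.1", "1007088003726.0")
```
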
### 📏 3. Model Evaluation
#### 3.1 Classic Metrics Evaluation
Classic metrics evaluate text editing quality using **OCR-based text accuracy**, **image-text alignment**, and **aesthetic quality**. All metrics are reported separately for the **Virtual** and **Real** splits.

#### Evaluated Metrics

| Abbreviation | Metric | Description |
|:---:|---|---|
| **OA** | OCR Accuracy | Whether the target text is correctly rendered in the editing region |
| **OP** | OCR Precision | Precision of text content (target + background) in the generated image |
| **OR** | OCR Recall | Recall of text content (target + background) in the generated image |
| **F1** | OCR F1 | Harmonic mean of OCR Precision and Recall |
| **NED** | Normalized Edit Distance | ROI-aware normalized edit distance between target and generated text |
| **CLIP** | CLIPScore | CLIP-based image-text alignment score |
| **AES** | Aesthetic Score | Predicted aesthetic quality score of the generated image |

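To make the OCR-based metrics concrete, here is a minimal sketch of word-level precision/recall/F1 and character-level normalized edit distance. The benchmark's own implementation is OCR-backed and ROI-aware (it restricts NED to text inside the editing region); this sketch only illustrates the underlying formulas.

```python
from collections import Counter

def ocr_prf(pred_words: list[str], gt_words: list[str]) -> tuple[float, float, float]:
    """Word-level precision/recall/F1 via multiset overlap (illustrative only)."""
    overlap = sum((Counter(pred_words) & Counter(gt_words)).values())
    p = overlap / len(pred_words) if pred_words else 0.0
    r = overlap / len(gt_words) if gt_words else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def normalized_edit_distance(pred: str, target: str) -> float:
    """Levenshtein distance normalized by the longer string's length."""
    m, n = len(pred), len(target)
    if max(m, n) == 0:
        return 0.0
    dp = list(range(n + 1))  # rolling DP row of the edit-distance table
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == target[j - 1] else 1
            prev, dp[j] = dp[j], min(dp[j] + 1,       # deletion
                                     dp[j - 1] + 1,   # insertion
                                     prev + cost)     # substitution
    return dp[n] / max(m, n)
```
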
#### Usage

Evaluation scripts are provided separately for the **fullset** and the **miniset**:
- `eval_scripts/classic_metrics_eval_full.sh` — evaluate on the full benchmark
- `eval_scripts/classic_metrics_eval_mini.sh` — evaluate on the 500-sample miniset

**Step 1. Edit the configuration variables in the evaluation script to match your project directory** (e.g., `eval_scripts/classic_metrics_eval_full.sh`):

```bash
MODELS="model-a,model-b,model-c"                    # Comma-separated list of model names to evaluate

path="your_project_path_here"
CACHE_DIR="$path/TextEdit/checkpoint"               # Directory for all model checkpoints (OCR, CLIP, etc.)

BENCHMARK_DIR="$path/TextEdit/eval_prompts/fullset"
GT_ROOT_DIR="$path/TextEdit/data"                   # Root path for original & GT images
MODEL_OUTPUT_ROOT="$path/TextEdit/output"           # Root path for model inference outputs
OUTPUT_DIR="$path/TextEdit/result/classic_fullset"  # Root path for classic-metric evaluation results
```

> **Note:** All required model checkpoints (PaddleOCR, CLIP, aesthetic model, etc.) should be placed under the **`CACHE_DIR`** directory.

**Step 2. Run the evaluation script on your model outputs.**

```bash
# Fullset evaluation
bash eval_scripts/classic_metrics_eval_full.sh

# Miniset evaluation
bash eval_scripts/classic_metrics_eval_mini.sh
```

Results are saved as `{model_name}.json` under the output directory, containing per-sample scores and aggregated metrics for both the **Virtual** and **Real** splits.

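To inspect a result file programmatically, you can simply load the JSON; the path and model name below are illustrative, and the exact schema is whatever the evaluation script writes.

```python
import json
from pathlib import Path

# Hypothetical result path; matches the model folder name used under output/.
report = json.loads(Path("result/classic_fullset/internvl-u.json").read_text())
print(json.dumps(report, indent=2)[:2000])  # peek at the aggregated metrics
```
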
---

#### 3.2 VLM-based Metrics Evaluation

Our VLM-based evaluation uses **Gemini-3-Pro-Preview** as an expert judge to score text editing quality across five fine-grained dimensions. The evaluation is a **two-step pipeline**.

#### Evaluated Metrics

| Abbreviation | Metric | Description |
|:---:|---|---|
| **TA** | Text Accuracy | Spelling correctness and completeness of the target text (1–5) |
| **TP** | Text Preservation | Preservation of non-target background text (1–5) |
| **SI** | Scene Integrity | Geometric stability of non-edited background areas (1–5) |
| **LR** | Local Realism | Inpainting quality, edge cleanness, and seamlessness (1–5) |
| **VC** | Visual Coherence | Style matching (font, lighting, shadow, texture harmony) (1–5) |
| **Avg** | Weighted Average | Weighted average of all five dimensions (default weights: 0.4 / 0.3 / 0.1 / 0.1 / 0.1) |

All raw scores (1–5) are normalized to 0–1 for reporting. A **cutoff mechanism** is available: if TA (Q1) < 4, the remaining dimensions are set to 0, reflecting that a failed text edit invalidates the other quality dimensions. The sketch below illustrates this aggregation for a single sample.

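This is a minimal sketch of the protocol described above; the linear (s − 1) / 4 mapping from 1–5 to 0–1 is an assumption on our part, and the exact normalization lives in `eval_pipeline/vlm_metrics_eval_step2.py`.

```python
def aggregate_vlm_scores(raw, weights=(0.4, 0.3, 0.1, 0.1, 0.1),
                         enable_cutoff=True):
    """Aggregate one sample's judge scores (TA, TP, SI, LR, VC), each 1-5.

    Assumes a linear 1-5 -> 0-1 normalization, (s - 1) / 4; see
    eval_pipeline/vlm_metrics_eval_step2.py for the actual implementation.
    """
    norm = [(s - 1) / 4 for s in raw]
    if enable_cutoff and raw[0] < 4:
        # Cutoff: a failed text edit (Q1 < 4) zeroes the other dimensions.
        norm = [norm[0], 0.0, 0.0, 0.0, 0.0]
    return sum(w * s for w, s in zip(weights, norm))

# e.g. aggregate_vlm_scores((5, 4, 4, 3, 4)) -> weighted 0-1 average
```
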
#### Step 1: Gemini API Evaluation

Send (Original Image, GT Image, Edited Image) triplets to the Gemini API for scoring.

Configure and run `eval_scripts/vlm_metrics_eval_step1.sh`:

```bash
API_KEY="your_gemini_api_key_here"
BASE_URL="your_gemini_api_base_url_here"

python eval_pipeline/vlm_metrics_eval_step1.py \
    --input_data_dir <your_path>/TextEdit/eval_prompts/fullset \
    --model_output_root <your_path>/TextEdit/output \
    --gt_data_root <your_path>/TextEdit/data \
    --output_base_dir <your_path>/TextEdit/result/vlm_gemini_full_answers \
    --model_name "gemini-3-pro-preview" \
    --models "model-a,model-b,model-c" \
    --api_key "$API_KEY" \
    --base_url "$BASE_URL" \
    --num_workers 64
```

Per-model `.jsonl` answer files are saved under the `output_base_dir`.

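For orientation, here is a rough sketch of what a single judging request might look like, assuming `BASE_URL` exposes an OpenAI-compatible chat endpoint (which is why the script accepts `--api_key`/`--base_url`). The actual prompt template and response parsing live in `eval_pipeline/vlm_metrics_eval_step1.py`, and the file paths below are purely illustrative.

```python
import base64
from openai import OpenAI

def image_part(path: str) -> dict:
    """Wrap a local image as a base64 data-URL content part."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return {"type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}

client = OpenAI(api_key="your_gemini_api_key_here",
                base_url="your_gemini_api_base_url_here")
resp = client.chat.completions.create(
    model="gemini-3-pro-preview",
    messages=[{"role": "user", "content": [
        {"type": "text",
         "text": "Score TA/TP/SI/LR/VC (1-5) for this text edit."},
        image_part("data/1.1.1/original.jpg"),            # original image
        image_part("data/1.1.1/gt.jpg"),                  # ground-truth edit
        image_part("output/internvl-u/1.1.1/edited.jpg"), # model output
    ]}],
)
print(resp.choices[0].message.content)
```
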
#### Step 2: Score Aggregation & Report

Aggregate the per-sample Gemini responses into a final report.

Configure and run `eval_scripts/vlm_metrics_eval_step2.sh`:

```bash
# Fullset report
python eval_pipeline/vlm_metrics_eval_step2.py \
    --answer_dir <your_path>/TextEdit/result/vlm_gemini_full_answers \
    --output_file <your_path>/TextEdit/result/gemini_report_fullset.json \
    --weights 0.4 0.3 0.1 0.1 0.1 \
    --enable_cutoff

# Miniset report
python eval_pipeline/vlm_metrics_eval_step2.py \
    --answer_dir <your_path>/TextEdit/result/vlm_gemini_mini_answers \
    --output_file <your_path>/TextEdit/result/gemini_report_miniset.json \
    --weights 0.4 0.3 0.1 0.1 0.1 \
    --enable_cutoff
```

**Key parameters:**
- `--weights`: Weights for Q1–Q5 (default: `0.4 0.3 0.1 0.1 0.1`).
- `--enable_cutoff`: Enable the cutoff mechanism — if Q1 < 4, Q2–Q5 are set to 0.

The output includes a JSON report, a CSV table, and a Markdown-formatted leaderboard printed to the console.

---

## 🎨 Visualization Output Example
<img src="assets/output.jpg" width="100%">

## Citation
If you find TextEdit-Bench useful, please cite our paper ([arXiv:2508.18265](https://arxiv.org/abs/2508.18265)).