---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: image_id
    dtype: string
  - name: image
    dtype: image
  - name: text
    dtype: string
  - name: caption
    dtype: string
  - name: prompt
    dtype: string
  - name: split
    dtype: string
  - name: ocr_confidence
    dtype: float64
  - name: ocr_backend
    dtype: string
  - name: caption_model
    dtype: string
  - name: source
    dtype: string
  - name: sharpness
    dtype: float64
  - name: brightness
    dtype: float64
  - name: contrast
    dtype: float64
  - name: resolution_w
    dtype: int64
  - name: resolution_h
    dtype: int64
  - name: text_length
    dtype: int64
  - name: word_count
    dtype: int64
  - name: phrase_reconstructed
    dtype: bool
  splits:
  - name: train
    num_bytes: 58573006
    num_examples: 800
  - name: val
    num_bytes: 6821157
    num_examples: 100
  - name: test
    num_bytes: 6848431
    num_examples: 100
  download_size: 72132017
  dataset_size: 72242594
task_categories:
  - image-to-text
  - text-to-image
language:
  - en
tags:
  - ocr
  - image-captioning
  - text-rendering
  - synthetic
  - blip2
  - easyocr
  - flux
size_categories:
  - 1K<n<10K
source_datasets:
  - stzhao/AnyWord-3M
---

# Text-in-Image OCR Dataset

*Built for **Project 12 — Efficient Image Generation**, as part of the ENSTA course [CSC_5IA21](https://giannifranchi.github.io/CSC_5IA21.html)*

**Team:** Adam Gassem · Asma Walha · Achraf Chaouch · Takoua Ben Aissa · Amaury Lorin  
**Tutors:** Arturo Mendoza Quispe · Nacim Belkhir

---

## Dataset Summary

A curated text-in-image dataset designed for fine-tuning text-to-image generative models (e.g. FLUX, Stable Diffusion, ControlNet) on accurate **text rendering**. Each sample pairs a real-world image containing readable text with:

- a verified OCR transcription (EasyOCR),
- a visual caption (BLIP-2),
- and a training prompt that embeds the OCR text verbatim.

Images are sourced from [AnyWord-3M](https://huggingface.co/datasets/stzhao/AnyWord-3M) and pass a rigorous multi-step quality pipeline before inclusion.

---

## Dataset Structure

| Split | Size |
|-------|------|
| train | 800 samples |
| val   | 100 samples |
| test  | 100 samples |

### Fields

| Field | Type | Description |
|-------|------|-------------|
| `image` | Image | The filtered image (512 px, JPEG) |
| `text` | string | Verified OCR text found in the image |
| `caption` | string | General visual description generated by BLIP-2 |
| `prompt` | string | Training prompt embedding the OCR text verbatim |
| `ocr_confidence` | float | EasyOCR confidence score (0–100) |
| `ocr_backend` | string | OCR engine used (`easyocr`) |
| `caption_model` | string | Captioning model used (`blip2` or `blip`) |
| `source` | string | AnyWord-3M subset of origin |
| `sharpness` | float | Laplacian variance of the image |
| `brightness` | float | Mean pixel brightness |
| `contrast` | float | Pixel standard deviation |
| `resolution_w` / `resolution_h` | int | Image dimensions in pixels |
| `text_length` | int | Character count of the OCR text |
| `word_count` | int | Word count of the OCR text |
| `phrase_reconstructed` | bool | Whether the full phrase was expanded beyond the bounding box |

### Sample record

```json
{
  "image": "<PIL.Image>",
  "text": "OPEN",
  "caption": "A storefront with a neon sign above the door.",
  "prompt": "A storefront with a neon sign above the door, with the text \"OPEN\" clearly visible",
  "ocr_confidence": 87.5,
  "source": "AnyWord-3M/laion",
  "sharpness": 142.3,
  "resolution_w": 512,
  "resolution_h": 384
}
```
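Judging from the sample above, the `prompt` field appears to follow a simple template over `caption` and `text`. The exact template is defined in the project's pipeline; the sketch below is a hypothetical reconstruction inferred from this one record:

```python
def build_prompt(caption: str, text: str) -> str:
    """Hypothetical prompt template inferred from the sample record:
    strip the caption's trailing period, then append the OCR text in quotes."""
    return f'{caption.rstrip(".")}, with the text "{text}" clearly visible'
```

Applied to the sample, `build_prompt("A storefront with a neon sign above the door.", "OPEN")` reproduces the `prompt` value shown above.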

---

## Usage

```python
from datasets import load_dataset

ds = load_dataset("your-org/your-dataset-name")

# Access a training sample
sample = ds["train"][0]
print(sample["prompt"])
sample["image"].show()
```

For fine-tuning with the prompt field:

```python
for sample in ds["train"]:
    image  = sample["image"]      # PIL image
    prompt = sample["prompt"]     # text-conditioned training caption
    text   = sample["text"]       # ground-truth OCR string
```
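The per-sample metadata also makes it easy to restrict training to the highest-quality pairs. A minimal sketch — the thresholds here are illustrative choices, not part of the dataset:

```python
def keep(sample: dict, min_conf: float = 80.0, min_sharpness: float = 100.0) -> bool:
    """Keep only well-read, sharp samples with at least one word.
    Thresholds are illustrative; tune them for your use case."""
    return (
        sample["ocr_confidence"] >= min_conf
        and sample["sharpness"] >= min_sharpness
        and sample["word_count"] >= 1
    )

# With a loaded dataset: high_quality = ds["train"].filter(keep)
```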

---

## Creation Pipeline

Images are drawn from AnyWord-3M (streamed) and pass through the following stages:

```
AnyWord-3M stream
        ↓
1. Annotation filtering   → valid, short, English text regions only
        ↓
2. Image quality gate     → resolution ≥ 256 px, sharpness ≥ 80,
                            brightness 30–230, contrast ≥ 20
        ↓
3. EasyOCR verify         → confirm annotated text is readable (conf ≥ 0.40)
        ↓
4. EasyOCR reconstruct    → expand to the full visible phrase
        ↓
5. BLIP-2 caption         → general visual description
        ↓
6. Prompt construction    → natural sentence with OCR text in quotes
        ↓
7. Split & save           → 80 % train / 10 % val / 10 % test
```
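The quality gate in step 2 relies on three standard image statistics: sharpness as the variance of the Laplacian, brightness as the mean intensity, and contrast as the intensity standard deviation. A self-contained sketch of how such a gate could be computed (the project's actual implementation may differ, e.g. by using OpenCV's Laplacian):

```python
import numpy as np

def quality_metrics(gray: np.ndarray) -> dict:
    """Compute quality statistics for a 2-D grayscale image in [0, 255].
    Sharpness is the variance of a 4-neighbour Laplacian filter response."""
    lap = (
        -4 * gray[1:-1, 1:-1]
        + gray[:-2, 1:-1] + gray[2:, 1:-1]
        + gray[1:-1, :-2] + gray[1:-1, 2:]
    )
    return {
        "sharpness": float(lap.var()),
        "brightness": float(gray.mean()),
        "contrast": float(gray.std()),
    }

def passes_gate(m: dict, w: int, h: int) -> bool:
    # Thresholds taken from step 2 of the pipeline above.
    return (
        min(w, h) >= 256
        and m["sharpness"] >= 80
        and 30 <= m["brightness"] <= 230
        and m["contrast"] >= 20
    )
```

A blurry or flat image (near-zero Laplacian variance and contrast) fails the gate, while a sharp, well-exposed one passes.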

---

## Source Subsets

| Subset | Description |
|--------|-------------|
| `laion` | Web-crawled natural images |
| `OCR_COCO_Text` | COCO scene text |
| `OCR_mlt2019` | Multi-language (English filtered) |
| `OCR_Art` | Artistic / designed text |

---

## Citation & Project

This dataset was produced as part of the **Efficient Image Generation** project at ENSTA Paris.  
Full methodology, training experiments, and inference benchmarks are documented in the [project report](https://drive.google.com/file/d/1ay4-cBOSt4LbLhwgQ0gBykda1Bu0HUXY/view?usp=drive_link).

---

## License

Released under the **MIT License**: free to use, modify, and distribute with attribution. Note that the AnyWord-3M source dataset and the BLIP-2 model are subject to their own respective licenses on Hugging Face.