---
license: apache-2.0
task_categories:
- image-to-text
tags:
- ocr
- captcha
size_categories:
- 1M<n<10M
---

**1.8 Million Synthetic Alphanumeric CAPTCHAs**

**What's This All About?**

Ever needed a big dataset of CAPTCHA images to train a solid recognition model, but never managed to get your grubby hands on one? Me too. That's why I created this one.

This is a synthetic, artificially generated dataset of ~1.8 million CAPTCHA images. Each image contains a random, case-sensitive sequence of letters (a-z, A-Z) and digits (0-9). It was built specifically for training and testing Optical Character Recognition (OCR) models, especially architectures like a CRNN trained with a CTC loss.
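
Since the labels are case-sensitive, the model only ever has to tell 62 character classes apart (plus the one extra "blank" class that a CTC output layer adds on top). A minimal sketch of that label space, using Python's standard `string` module:

```
import string

# The full label space: digits plus lowercase and uppercase letters.
CHARSET = string.digits + string.ascii_lowercase + string.ascii_uppercase
assert len(CHARSET) == 62

# A CTC-based model adds one extra "blank" class on top of these,
# so its output layer ends up with 63 units.
print(CHARSET)
```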

**What Can You Do With It?**

The task here is Image-to-Text, also known as Optical Character Recognition. You can use this dataset to:

Train a high-accuracy CAPTCHA solver from scratch.

Benchmark different OCR model architectures.

Experiment with data augmentation techniques at scale.

Pre-train a model on a more focused text recognition task.

**How is the Data Organized?**

The dataset is straightforward. The repository contains two main files:

samples.zip: A single large zip archive containing all ~1.8 million image files.

dataset.csv: Your golden key! A single CSV file that maps every image file to its correct text label. This is what you'll use to load the data.

**What's in the Files? (The Data Fields)**

The dataset.csv file has only a few columns, and you really only need two of them:

filepath (string): The path to the image file as it was originally structured (e.g., samples/W9friJ/00001.png).

text (string): The actual text—the string of characters you see in the image (e.g., "W9friJ").

length (int): The length of the text label, in characters.

(Ignore these): The remaining columns (width, height, cycle, and a few other generation parameters) were artifacts of my generation process. You won't need them for training, since you'll be resizing the images anyway.
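
Want a quick look before wiring up a full pipeline? Here's a minimal sketch that reads the CSV with pandas (an assumption on my part; any CSV reader works) and peeks at the columns described above:

```
import pandas as pd

# Load the label file; 'filepath' and 'text' are the two columns you need.
df = pd.read_csv("dataset.csv")
print(df[["filepath", "text", "length"]].head())
# e.g. samples/W9friJ/00001.png | W9friJ | 6
```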

**A Peek at the Data**

**Things to Keep in Mind (Limitations & Biases)**

This dataset is big, but it isn't perfect. Here are some caveats:

It's Synthetic: These images are computer-generated. A model trained on this data will likely perform well on similar-looking synthetic CAPTCHAs, but it may need fine-tuning to handle real-world CAPTCHAs from websites, which use different fonts, noise patterns, and distortions.

Relatively Clean: These CAPTCHAs are fairly straightforward. They don't have dense occlusions, heavy background noise, or extreme character distortion.

English Alphanumeric Only: The characters are limited to a-z, A-Z, and 0-9. A model trained on this data won't handle special characters or letters from other alphabets.

**How Can You Use It? (License)**

This dataset is released under the Apache 2.0 license, a very permissive license that lets you use, modify, and distribute the work for any purpose (personal or commercial) with very few restrictions. In short: go build something amazing with it!

**How to Cite This Work**

If you use this dataset in your work or research, please give credit with a citation. Here is a BibTeX entry you can use:

```
@misc{szilard_2024_captcha,
  author = {Pálnagy Szilárd},
  title = {1.8 Million Synthetic Alphanumeric CAPTCHAs},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face Hub},
  howpublished = {\url{https://huggingface.co/datasets/szili2011/captcha-ocr-dataset}},
}
```

If you're training on Google Colab, you can use the end-to-end starter script below:

```
# ==============================================================================
# Step 1: Install and Import Everything We Need
# ==============================================================================
# First, let's get our environment ready by installing the Hugging Face libraries.
print("--> Step 1: Installing libraries.")
!pip install datasets -q
!pip install huggingface_hub -q

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import os
import shutil
from datasets import load_dataset
from huggingface_hub import hf_hub_download, login
from google.colab import drive

print("\n--> Libraries installed and imported successfully!")

# ==============================================================================
# Step 2: Connect to Hugging Face and Google Drive
# ==============================================================================
# Authenticate with Hugging Face so we can download the dataset.
# You'll be prompted to enter a token, which you can create at:
# https://huggingface.co/settings/tokens
print("\n--> Step 2: Connecting to services.")
login()

# We'll also connect to Google Drive to store our model once it's trained.
drive.mount('/content/drive')
print("--> Services connected!")



# ==============================================================================
# Step 3: Load Your Dataset from the Hub
# ==============================================================================
# Here, we load your wonderful dataset directly from the Hugging Face Hub.
print("\n--> Step 3: Loading the dataset.")

DATASET_REPO = "szili2011/captcha-ocr-dataset"

# We first load the metadata from your 'dataset.csv' file.
# This won't download the images yet, just the text information.
print("   - Loading metadata from dataset.csv.")
dataset_info = load_dataset(DATASET_REPO, data_files="dataset.csv")

# Now we download the huge 'samples.zip' file. This will take a while!
# hf_hub_download is smart and caches the file so you don't re-download it every time.
print("   - Downloading samples.zip (this may take a long time).")
zip_path = hf_hub_download(repo_id=DATASET_REPO, filename="samples.zip", repo_type="dataset")

# Unzip the images into Colab's fast local storage for fast access during training.
UNZIPPED_PATH = "/content/images"
print(f"   - Unzipping images to {UNZIPPED_PATH}.")
if os.path.exists(UNZIPPED_PATH):
    shutil.rmtree(UNZIPPED_PATH)  # clean up leftovers from previous runs
shutil.unpack_archive(zip_path, UNZIPPED_PATH, 'zip')

# The images are now at '/content/images/samples/'
# Now let's create a full path for each image in our dataset object.
SAMPLES_PATH = os.path.join(UNZIPPED_PATH, "samples")

def create_full_path(example):
    # The 'filepath' from the CSV looks like 'samples/xyz/123.png'.
    # Build the full, correct path to where we just unzipped the files.
    example['image_path'] = os.path.join(SAMPLES_PATH, *example['filepath'].split('/')[1:])
    return example

print("   - Creating full image paths.")
dataset = dataset_info.map(create_full_path, num_proc=4) # num_proc > 1 speeds this up

# Clean up columns we no longer need.
dataset = dataset.remove_columns(['filepath', 'width', 'height', 'cycle', 'complexity', 'generation_mode', 'sample_number'])

# Split the data into a training set (95%) and a validation set (5%)
dataset = dataset["train"].train_test_split(test_size=0.05, seed=42)
train_ds = dataset["train"]
val_ds = dataset["test"]

print(f"\n--> Dataset is ready! Training samples: {len(train_ds)}, Validation samples: {len(val_ds)}")

# ==============================================================================
# Step 4: Create the Character Vocabulary
# ==============================================================================
# A model can't read 'A', 'B', 'C'. It requires numbers.
# Here, we create a mapping from each character to a special number.
print("\n--> Step 4: Creating character vocabulary.")

# Get all the unique characters in the training text
all_text = "".join(train_ds["text"])
characters = sorted(list(set(all_text)))

# The StringLookup layer does the heavy lifting of creating the mapping.
char_to_num = layers.StringLookup(vocabulary=list(characters), mask_token=None)
num_to_char = layers.StringLookup(vocabulary=char_to_num.get_vocabulary(), mask_token=None, invert=True)

print(f"   - Vocabulary size: {char_to_num.vocabulary_size()}")

print(f"   - Vocabulary: {''.join(char_to_num.get_vocabulary())}")
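
# (Optional sanity check) Round-trip one label through the lookup layers to
# confirm the mapping is invertible. "W9friJ" is just the example string
# from the dataset card above.
demo = char_to_num(tf.strings.unicode_split("W9friJ", input_encoding="UTF-8"))
print(f"   - 'W9friJ' as ids:  {demo.numpy()}")
print(f"   - ids back to text: {tf.strings.reduce_join(num_to_char(demo)).numpy().decode('utf-8')}")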

# ==============================================================================
# Step 5: The Preprocessing Pipeline (tf.data)
# ==============================================================================
# This is our data assembly line. It will convert image paths and labels into
# ready-to-use tensors for our model.
print("\n--> Step 5: Creating TensorFlow data pipeline.")

# Constant values for our images and batches
IMG_HEIGHT = 50
IMG_WIDTH = 200
BATCH_SIZE = 128

def encode_for_tf(example):
    # 1. Read the image file from its path
    img = tf.io.read_file(example['image_path'])
    # 2. Decode it as a grayscale PNG (1 channel)
    img = tf.io.decode_png(img, channels=1)
    # 3. Convert pixels to the [0, 1] range
    img = tf.image.convert_image_dtype(img, tf.float32)
    # 4. Resize to a consistent size
    img = tf.image.resize(img, [IMG_HEIGHT, IMG_WIDTH])
    # 5. Transpose the image! This is a key step for CRNNs:
    #    the RNN should see a sequence of vertical slices of the image.
    img = tf.transpose(img, perm=[1, 0, 2])
    # 6. Convert the text label into a sequence of ids using our vocabulary
    label = char_to_num(tf.strings.unicode_split(example['text'], input_encoding="UTF-8"))
    return {"image": img, "label": label}

# Build the tf.data pipelines. We map encode_for_tf over *individual* examples
# (tf.io.read_file needs a scalar path), then use padded_batch because the
# text labels have variable length. Shuffling, batching, and prefetching keep
# the GPU fed during training.
tf_train_dataset = (
    tf.data.Dataset.from_tensor_slices(
        {"image_path": train_ds["image_path"], "text": train_ds["text"]}
    )
    .shuffle(buffer_size=10_000)
    .map(encode_for_tf, num_parallel_calls=tf.data.AUTOTUNE)
    .padded_batch(BATCH_SIZE)
    .prefetch(tf.data.AUTOTUNE)
)

tf_val_dataset = (
    tf.data.Dataset.from_tensor_slices(
        {"image_path": val_ds["image_path"], "text": val_ds["text"]}
    )
    .map(encode_for_tf, num_parallel_calls=tf.data.AUTOTUNE)
    .padded_batch(BATCH_SIZE)
    .prefetch(tf.data.AUTOTUNE)
)

print("--> Data pipeline is built!")

# ===============================================================================
# Step 6: The Model Architecture (The Brains of the Operation)
# ===============================================================================
# Here's the CRNN model itself. It consists of three sections:
# 1. CNN (Convolutional) layers to "see" features in the image.
# 2. RNN (Recurrent) layers to "read" the sequence of features.
# 3. A final layer for CTC loss computation.
print("\n--> Step 6: Building the CRNN model.")

class CTCLayer(layers.Layer):
    """A custom Keras layer to calculate the CTC loss."""
    def __init__(self, name=None):
        super().__init__(name=name)
        self.loss_fn = keras.backend.ctc_batch_cost
    
    def call(self, y_true, y_pred):
        batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64")
        input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64")
        input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64")
        # Labels are zero-padded to the longest label in the batch, so count
        # the real (non-zero) entries per sample. Index 0 is StringLookup's
        # [UNK] slot and never appears in a real label.
        label_length = tf.math.count_nonzero(y_true, axis=1, keepdims=True, dtype="int64")
        loss = self.loss_fn(y_true, y_pred, input_length, label_length)
        self.add_loss(loss)
        return y_pred

def build_model():
    # Inputs to the model
    input_img = layers.Input(shape=(IMG_WIDTH, IMG_HEIGHT, 1), name="image", dtype="float32")
    labels = layers.Input(name="label", shape=(None,), dtype="float32")

    # Part 1: The CNN to extract visual features
    x = layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(input_img)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(x)
    x = layers.MaxPooling2D((2, 2))(x)

    # Reshape the output of the CNN to be a sequence for the RNN
    new_shape = ((IMG_WIDTH // 4), (IMG_HEIGHT // 4) * 64)
    x = layers.Reshape(target_shape=new_shape)(x)
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dropout(0.2)(x)

    # Part 2: The RNN to read the sequence
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True, dropout=0.25))(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True, dropout=0.25))(x)

    # Part 3: The output layer
    # The number of units is our vocabulary size + 1 (for the special CTC blank token)
    x = layers.Dense(char_to_num.vocabulary_size() + 1, activation="softmax", name="dense_output")(x)

    # Add the CTC loss calculation as a layer
    output = CTCLayer(name="ctc_loss")(labels, x)

    # Create the final model
    model = keras.models.Model(inputs=[input_img, labels], outputs=output, name="captcha_ocr_model")

    # Compile the model with the Adam optimizer
    model.compile(optimizer=keras.optimizers.Adam())
    return model

# Let's build it and see a summary
model = build_model()
model.summary()


# ==============================================================================
# Step 7: Train the Model!
# ==============================================================================
# This is the moment we've been waiting for. Let's start training.
print("\n--> Step 7: Starting training.")

EPOCHS = 20  # You can increase this for better results

# This callback saves the best version of the model to your Google Drive.
# It monitors the validation loss and only saves when it improves.
# (To reload it later, pass custom_objects={"CTCLayer": CTCLayer} to load_model.)
checkpoint_path = "/content/drive/My Drive/captcha_model_best.h5"
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path,
    save_weights_only=False,
    monitor='val_loss',
    mode='min',
    save_best_only=True,
    verbose=1
)

# This callback stops training early if the model stops improving.
early_stopping_callback = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True, verbose=1
)

# Let's start!
history = model.fit(
    tf_train_dataset,
    validation_data=tf_val_dataset,
    epochs=EPOCHS,
    callbacks=[model_checkpoint_callback, early_stopping_callback],
)

print("\n--> Training complete! Best model saved to your Google Drive.")

# ==============================================================================
# Step 8: Check the Results (Inference)
# ==============================================================================
# Now that we have a trained model, let's attempt to check how well it does.
print("\n--> Step 8: Checking some predictions.")

# First, we construct a prediction-only model from our trained model.
# This model takes an image and outputs the raw predictions.
prediction_model = keras.models.Model(
    model.get_layer(name="image").input, model.get_layer(name="dense_output").output
)

# A helper function to decode the raw predictions into human-readable text
def decode_batch_predictions(pred):
    input_len = np.ones(pred.shape[0]) * pred.shape[1]
    # We use greedy search for simplicity; beam search can give better results.
    results = keras.backend.ctc_decode(pred, input_length=input_len, greedy=True)[0][0]
    output_text = []
    for res in results:
        # The [UNK] token comes from our StringLookup layer; strip it out.
        res = tf.strings.reduce_join(num_to_char(res)).numpy().decode("utf-8").replace("[UNK]", "")
        output_text.append(res)
    return output_text

# Let's take one batch from our validation set and examine the predictions
for batch in tf_val_dataset.take(1):
    batch_images = batch["image"]
    batch_labels = batch["label"]

    preds = prediction_model.predict(batch_images)
    pred_texts = decode_batch_predictions(preds)

    orig_texts = []
    for label in batch_labels:
        label = tf.strings.reduce_join(num_to_char(label)).numpy().decode("utf-8").replace("[UNK]", "")
        orig_texts.append(label)

    for i in range(min(10, len(pred_texts))):
        # Show the first 10 predictions
        print(f"  Original: {orig_texts[i]:<10} | Predicted: {pred_texts[i]}")
```
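
One last practical note: the saved .h5 checkpoint contains the custom CTCLayer, so reloading it in a fresh session requires the class definition plus custom_objects. A minimal sketch (path as in Step 7 above):

```
from tensorflow import keras

# Re-run the CTCLayer class definition from the script first, then:
model = keras.models.load_model(
    "/content/drive/My Drive/captcha_model_best.h5",
    custom_objects={"CTCLayer": CTCLayer},
)

# Rebuild the inference-only model exactly as in Step 8.
prediction_model = keras.models.Model(
    model.get_layer(name="image").input,
    model.get_layer(name="dense_output").output,
)
```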

Good Luck!