---
license: apache-2.0
task_categories:
- image-to-text
language:
- en
pretty_name: dataset-BotDetect-CAPTCHA-Generator
size_categories:
- 1M<n<10M
---
📦 CAPTCHA Datasets (style0–style59)
====================================
This repository contains CAPTCHA datasets for training CRNN+CTC models. Each archive `dataset_*.tar.gz` includes **60 styles** (from [BotDetect Captcha](https://captcha.com/)), structured as folders `style0` through `style59`. Each style contains _N_ images depending on the archive name.
* * *
🗂️ Available Archives
----------------------
* `dataset_500.tar.gz` → 500 images per style (= 30,000 total)
* `dataset_1000.tar.gz` → 1,000 images per style (= 60,000 total)
* `dataset_5000.tar.gz` → 5,000 images per style (= 300,000 total)
* `dataset_10000.tar.gz` → 10,000 images per style (= 600,000 total)
* `dataset_20000.tar.gz` → 20,000 images per style (= 1,200,000 total)
* `dataset_1000_rand.tar.gz` → randomized variant with 1,000 images per style
**Naming convention:** `dataset_{N}.tar.gz` means each `styleX` folder holds exactly `N` PNG images.
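The naming convention can be checked mechanically. A minimal sketch (the helper name `expected_total` is hypothetical) that parses `N` out of an archive name and returns the expected file count:

```python
import re

def expected_total(archive_name, n_styles=60):
    """Parse N from a dataset_{N}.tar.gz name and return N * n_styles.

    Hypothetical helper illustrating the naming convention; the optional
    `_rand` suffix marks the randomized variant described above.
    """
    m = re.match(r"^dataset_(\d+)(_rand)?\.tar\.gz$", archive_name)
    if m is None:
        raise ValueError(f"unexpected archive name: {archive_name}")
    return int(m.group(1)) * n_styles

print(expected_total("dataset_500.tar.gz"))        # 30000
print(expected_total("dataset_1000_rand.tar.gz"))  # 60000
```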
* * *
📁 Directory Layout
-------------------
```
/path/to/dataset
├── style0/
│   ├── A1B2C.png
│   ├── 9Z7QK.png
│   └── ...
├── style1/
│   ├── K9NO2.png
│   └── ...
├── ...
└── style59/
```
* **Filename** = ground-truth label (5 uppercase alphanumeric chars), e.g. `K9NO2.png`.
* **Image size** = `50×250` pixels (H=50, W=250), grayscale PNG.
* **Label rule** = regex `^[A-Z0-9]{5}$` (exactly 5 chars, uppercase & digits).
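The filename-equals-label rule can be applied in one place when loading. A small sketch (the helper name `label_from_filename` is hypothetical) that derives a label from a filename, or returns `None` if the name violates the rule:

```python
import re

# Strict rule from above: exactly 5 uppercase letters or digits.
LABEL_RE = re.compile(r"^[A-Z0-9]{5}$")

def label_from_filename(filename):
    """Return the 5-char ground-truth label for names like 'K9NO2.png',
    or None if the filename does not follow the label rule."""
    stem, ext = filename.rsplit(".", 1) if "." in filename else (filename, "")
    if ext.lower() != "png" or not LABEL_RE.match(stem):
        return None
    return stem

print(label_from_filename("K9NO2.png"))  # K9NO2
print(label_from_filename("k9no2.png"))  # None (lowercase violates the rule)
```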
* * *
🧰 Extraction
-------------
```bash
# example: extract into /workspace/dataset_1000
mkdir -p /workspace/dataset_1000
tar -xvzf dataset_1000.tar.gz -C /workspace/dataset_1000
```
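If you prefer to stay in Python, the standard library's `tarfile` module does the same job; a sketch using the example paths from above:

```python
import os
import tarfile

def extract_archive(archive_path, dest_dir):
    """Extract a dataset_{N}.tar.gz archive into dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    with tarfile.open(archive_path, "r:gz") as tar:
        # The archive is assumed trusted; on Python 3.12+ prefer
        # tar.extractall(dest_dir, filter="data") to guard against
        # path-traversal entries.
        tar.extractall(dest_dir)

# extract_archive("dataset_1000.tar.gz", "/workspace/dataset_1000")
```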
* * *
✅ Quick File Counts
--------------------
```bash
# total PNG files (depth 2 to only count inside style folders)
find /workspace/dataset_1000 -maxdepth 2 -type f -name '*.png' | wc -l

# per-style counts without a for-loop (prints "count styleX");
# $(NF-1) is the style folder, the second-to-last path component
find /workspace/dataset_1000 -mindepth 2 -maxdepth 2 -type f -name '*.png' \
  | awk -F/ '{print $(NF-1)}' | sort | uniq -c | sort -k2
```
Expected totals:
* `dataset_500` → 500 × 60 = 30,000 files
* `dataset_1000` → 60,000 files
* `dataset_5000` → 300,000 files
* `dataset_10000` → 600,000 files
* `dataset_20000` → 1,200,000 files
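The same per-style tally can be produced in pure Python; a sketch (the helper name `count_per_style` is hypothetical, and the root is assumed to be an extracted archive):

```python
import os
from collections import Counter

def count_per_style(root):
    """Count PNG files in each styleX folder directly under root."""
    counts = Counter()
    for entry in os.scandir(root):
        if entry.is_dir() and entry.name.startswith("style"):
            counts[entry.name] = sum(
                1 for f in os.listdir(entry.path) if f.endswith(".png")
            )
    return counts

# counts = count_per_style("/workspace/dataset_1000")
# sum(counts.values()) should match the expected total above
```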
* * *
🧪 Label Validation
-------------------
```bash
# list filenames that violate the strict 5-char uppercase/digit rule
find /workspace/dataset_1000 -type f -name '*.png' \
  | awk -F/ '{print $NF}' | sed 's/\.png$//' \
  | grep -vE '^[A-Z0-9]{5}$' | head
```
CSV report via Python (pandas):
```python
import os
import pandas as pd
from glob import glob

root = "/workspace/dataset_1000"
rows = []
for s in range(60):
    for p in glob(os.path.join(root, f"style{s}", "*.png")):
        rows.append({"style": f"style{s}", "filepath": p,
                     "label": os.path.basename(p)[:-4]})
df = pd.DataFrame(rows)

# na=False so missing labels count as invalid instead of silently passing
bad = df[~df["label"].str.match(r"^[A-Z0-9]{5}$", na=False)]
print("Invalid labels:", len(bad))
if len(bad):
    bad.to_csv("invalid_labels.csv", index=False)
```
* * *
🧩 Example: Load to DataFrame
-----------------------------
```python
import os
from glob import glob
import pandas as pd

def load_dataset(root_dir):
    data = []
    for style_id in range(60):
        folder = os.path.join(root_dir, f"style{style_id}")
        for path in glob(os.path.join(folder, "*.png")):
            label = os.path.splitext(os.path.basename(path))[0]
            data.append((path, label, f"style{style_id}"))
    df = pd.DataFrame(data, columns=["filepath", "label", "style"])
    # enforce strict label rule
    df = df[df["label"].str.match(r"^[A-Z0-9]{5}$")]
    return df

df = load_dataset("/workspace/dataset_1000")
print(df.head(), len(df))
```
* * *
🔁 Merge Datasets (no loop)
---------------------------
**Add new files without overwriting existing ones**:
```bash
# style*/ matches style0..style9 as well as style10..style59;
# a pattern like style[0-5][0-9]/ would miss the single-digit folders
rsync -av \
  --ignore-existing \
  --include='style*/' \
  --include='style*/*.png' \
  --exclude='*' \
  /workspace/dataset_10000/ /workspace/dataset_20000/
```
**Overwrite only if source is newer**:
```bash
rsync -av --update \
  --include='style*/' \
  --include='style*/*.png' \
  --exclude='*' \
  /workspace/dataset_10000/ /workspace/dataset_20000/
```
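Because filenames are labels, two archives can contain the same 5-char name in the same style; before merging you may want to list such collisions. A sketch (the helper name `collisions` is hypothetical; both roots are assumed to follow the `style0`..`style59` layout):

```python
import os

def collisions(src_root, dst_root):
    """Return {style_name: shared_filenames} for files present in the
    same style folder of both extracted datasets."""
    shared = {}
    for style in os.listdir(src_root):
        a = os.path.join(src_root, style)
        b = os.path.join(dst_root, style)
        if os.path.isdir(a) and os.path.isdir(b):
            common = set(os.listdir(a)) & set(os.listdir(b))
            if common:
                shared[style] = common
    return shared

# collisions("/workspace/dataset_10000", "/workspace/dataset_20000")
```

With `--ignore-existing`, colliding files keep the destination's copy; this check just makes that choice visible.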
* * *
🔒 Checksums
------------
Optional: keep SHA256 for integrity.
```bash
sha256sum dataset_1000.tar.gz > dataset_1000.tar.gz.sha256
sha256sum -c dataset_1000.tar.gz.sha256
```
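On systems without `sha256sum`, the digest can be computed with Python's `hashlib`; a sketch (the helper name `sha256_of` is hypothetical):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream-hash a file in 1 MiB chunks; hex digest matches
    the first column of `sha256sum <path>` output."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# sha256_of("dataset_1000.tar.gz")
```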
* * *
📌 Notes
--------
* All images are prepared for CRNN+CTC models with input `(H, W) = (50, 250)`, grayscale.
* Character distribution: digits 0–9 and letters A–Z (uppercase).
* Each style emulates a distinct visual variant (font/noise/warp) from BotDetect.
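For CTC training, the 36-char charset above must be mapped to class indices. A minimal sketch, assuming the common convention of reserving index 0 for the CTC blank (frameworks differ, so adjust to yours), with a greedy decode for completeness:

```python
import string

# Index 0 is reserved for the CTC blank (an assumption; some frameworks
# put the blank last). Classes 1..36 cover 0-9 then A-Z.
CHARSET = string.digits + string.ascii_uppercase
CHAR_TO_IDX = {c: i + 1 for i, c in enumerate(CHARSET)}
IDX_TO_CHAR = {i: c for c, i in CHAR_TO_IDX.items()}

def encode(label):
    """Map a 5-char label to a list of class indices."""
    return [CHAR_TO_IDX[c] for c in label]

def decode(indices):
    """Greedy CTC decode: collapse repeats, then drop blanks (index 0)."""
    out, prev = [], None
    for i in indices:
        if i != prev and i != 0:
            out.append(IDX_TO_CHAR[i])
        prev = i
    return "".join(out)

print(encode("K9NO2"))                              # [21, 10, 24, 25, 3]
print(decode([21, 21, 0, 10, 24, 24, 25, 0, 3]))    # K9NO2
```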
* * *
📞 Contact
----------
For questions, dataset issues, or custom subsets, please open an issue in this repository.