---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 45122565
    num_examples: 20000
  - name: validation
    num_bytes: 674338
    num_examples: 300
  - name: test
    num_bytes: 1129219
    num_examples: 500
  download_size: 37814659
  dataset_size: 46926122
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
## Dataset Summary
I previously uploaded a dataset similar to this one, which is why this one is
named with the suffix `_v2`. In this dataset card, we refer to the previous
dataset as v1.

This v2 version attempts to fix the following issues:

- There were simply too many images in v1 for any model to properly run through even a single epoch.
- Consequently, an unlucky choice of training images might fail to cover all the symbols.
- v1 seems to lack CAPTCHA images with repeated symbols, e.g. `"jj12oj"`.
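The symbol-coverage concern above can be sanity-checked with a short helper. A minimal sketch, assuming (hypothetically) that the alphabet is lowercase letters plus digits; the dataset's actual symbol set may differ:

```python
# Hypothetical helper: which symbols of an assumed alphabet never appear
# in a collection of CAPTCHA labels?
import string

ALPHABET = set(string.ascii_lowercase + string.digits)  # assumed alphabet

def uncovered_symbols(labels):
    """Return the symbols in ALPHABET that appear in no label."""
    seen = set()
    for label in labels:
        seen.update(label)
    return ALPHABET - seen

# On a tiny sample, most symbols are (unsurprisingly) still uncovered.
sample = ["9ymyht", "jj12oj"]
print(len(uncovered_symbols(sample)))  # 27 of the 36 assumed symbols unseen
```

Running this over all of `dataset["train"]["label"]` would confirm whether the training split covers the full alphabet.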
The usage and meaning of the current v2 dataset should be intuitive (and quite independent of v1):
```python
In [1]: from datasets import load_dataset

In [2]: dataset = load_dataset("phunc20/nj_biergarten_captcha_v2")
README.md: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 533/533 [00:00<00:00, 1.58MB/s]
train-00000-of-00001.parquet: 100%|██████████████████████████████████████████████████████████████████| 36.3M/36.3M [00:07<00:00, 2.02MB/s]
validation-00000-of-00001.parquet: 100%|███████████████████████████████████████████████████████████████| 541k/541k [00:00<00:00, 2.06MB/s]
test-00000-of-00001.parquet: 100%|█████████████████████████████████████████████████████████████████████| 931k/931k [00:00<00:00, 2.04MB/s]
Generating train split: 100%|████████████████████████████████████████████████████████████| 20000/20000 [00:00<00:00, 113382.55 examples/s]
Generating validation split: 100%|████████████████████████████████████████████████████████████| 300/300 [00:00<00:00, 45083.88 examples/s]
Generating test split: 100%|██████████████████████████████████████████████████████████████████| 500/500 [00:00<00:00, 92186.56 examples/s]

In [3]: dataset
Out[3]:
DatasetDict({
    train: Dataset({
        features: ['image', 'label'],
        num_rows: 20000
    })
    validation: Dataset({
        features: ['image', 'label'],
        num_rows: 300
    })
    test: Dataset({
        features: ['image', 'label'],
        num_rows: 500
    })
})

In [4]: dataset["test"][0]["label"]
Out[4]: '9ymyht'

In [5]: dataset["test"][0]["image"]
Out[5]: <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=140x50>
```
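Before feeding an image to a model, one typically converts the PIL object to an array. A minimal sketch: a synthetic blank image stands in for `dataset["test"][0]["image"]` here so the snippet runs without downloading anything, but the shape matches the dataset's 140x50 RGB images:

```python
import numpy as np
from PIL import Image

# Stand-in for a real CAPTCHA image from the dataset (RGB, 140x50).
image = Image.new("RGB", (140, 50))
array = np.asarray(image)  # shape is (height, width, channels)
print(array.shape)  # (50, 140, 3)
```

Note that PIL reports size as (width, height) while the resulting array is (height, width, channels).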
## Citation Information
```bibtex
@ONLINE{nj_biergarten_captcha_v2,
    author = "phunc20",
    title  = "nj_biergarten_captcha_v2",
    url    = "https://huggingface.co/datasets/phunc20/nj_biergarten_captcha_v2"
}
```