---
license: cc-by-nc-4.0
language:
  - en
tags:
  - ocr
  - text-recognition
  - scene-text
  - image-to-text
size_categories:
  - 100K<n<1M
---

# TokenShrink-OCR Dataset

## Introduction

TokenShrink-OCR is a large-scale dataset of 120,000 images designed for Optical Character Recognition (OCR) tasks.

All images are derived from the ImageNet database, yielding a challenging collection of text set against complex backgrounds, under varied lighting conditions, and in diverse fonts.

## Dataset Structure

All image files are stored in a sharded layout: each split (train, validation, test) is divided into numbered subfolders containing 1,000 files each.

The directory structure in the remote repository is as follows:

```
|-- train/
|   |-- 000/
|   |   |-- image_0000001.jpg
|   |   |-- image_0000002.jpg
|   |   `-- ... (1,000 files)
|   |-- 001/
|   |   |-- image_0001001.jpg
|   |   `-- ... (1,000 files)
|   |-- 002/
|   |   `-- ...
|   `-- ... (e.g., up to "119")
|
|-- validation/
|   |-- 000/
|   |   |-- image_val_00001.jpg
|   |   `-- ... (1,000 files)
|   |-- 001/
|   |   `-- ...
|   `-- ...
|
`-- test/
    |-- 000/
    |   |-- image_test_00001.jpg
    |   `-- ... (1,000 files)
    |-- 001/
    |   `-- ...
    `-- ...
```
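
Because each shard folder holds exactly 1,000 files, an image's shard can be computed from its global index. Below is a minimal, hypothetical helper (the function name and zero-padding widths are assumptions inferred from the filenames in the tree above):

```python
def shard_path(split: str, index: int) -> str:
    """Return the expected repo path for the image with 1-based `index`.

    Hypothetical sketch: assumes exactly 1,000 files per shard folder and
    the zero-padded filenames shown in the directory tree above.
    """
    shard = f"{(index - 1) // 1000:03d}"  # images 1-1000 -> 000, 1001-2000 -> 001, ...
    if split == "train":
        filename = f"image_{index:07d}.jpg"        # e.g. image_0001001.jpg
    elif split == "validation":
        filename = f"image_val_{index:05d}.jpg"    # e.g. image_val_00001.jpg
    else:  # "test"
        filename = f"image_test_{index:05d}.jpg"   # e.g. image_test_00001.jpg
    return f"{split}/{shard}/{filename}"


print(shard_path("train", 1001))  # -> train/001/image_0001001.jpg
```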

## How to Use

You can load all of the sharded data with the `datasets` library, using the `imagefolder` loader and a glob (wildcard) pattern.

### Install Dependencies

```bash
pip install datasets
```

The `datasets` library will automatically find every file matching the `*/*.jpg` pattern inside each split directory and merge them into a single dataset.

```python
from datasets import load_dataset

REPO_ID = "LukB4UJump/TokenShrink-OCR"

IMAGE_EXTENSION = "jpg"

# The hf:// scheme points fsspec at files inside the Hub repository,
# so the glob patterns are resolved remotely.
data_files = {
    "train": f"hf://datasets/{REPO_ID}/train/*/*.{IMAGE_EXTENSION}",       # Matches train/000/*.jpg, train/001/*.jpg ...
    "validation": f"hf://datasets/{REPO_ID}/validation/*/*.{IMAGE_EXTENSION}",
    "test": f"hf://datasets/{REPO_ID}/test/*/*.{IMAGE_EXTENSION}",
}

# Use the "imagefolder" loader.
# streaming=True lets you access the data without downloading all 120k images, saving disk space.
dataset = load_dataset(
    "imagefolder",
    data_files=data_files,
    streaming=True,  # Recommended for large datasets
)

# --- To download all data at once instead (requires sufficient disk space) ---
# dataset = load_dataset(
#     "imagefolder",
#     data_files=data_files,
#     streaming=False,
# )

print(dataset)
```
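
With `streaming=True`, each split is an `IterableDataset`, so you iterate over examples rather than indexing into them. A minimal usage sketch (the `image` column is how `imagefolder` exposes the decoded images):

```python
from itertools import islice

# In streaming mode each split is an IterableDataset: iterate instead of indexing.
for example in islice(dataset["train"], 3):
    image = example["image"]  # decoded lazily as a PIL.Image.Image
    print(image.size, image.mode)
```

Alternatively, `dataset["train"].take(3)` returns an `IterableDataset` limited to the first three examples.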