# IDL-WDS OCR Evaluation Dataset

## Dataset Description
This dataset is a carefully curated subset of the original pixparse/idl-wds dataset, specifically designed for OCR evaluation and benchmarking.
### Dataset Summary
- Source Dataset: pixparse/idl-wds - Industry Documents Library (IDL)
- Purpose: OCR evaluation on single-page documents
- Sample Count: 1,000 carefully selected single-page documents
- Selection Criteria: Only documents with exactly 1 page in their JSON metadata
- Format: Organized folder structure with paired image and ground truth data
### Key Features
- Single-Page Focus: All documents contain exactly one page, eliminating multi-page complexity for OCR evaluation
- High-Quality Ground Truth: Each sample includes detailed OCR annotations with bounding boxes, polygons, and confidence scores
- Standardized Format: Consistent file structure across all samples
- Ready for Evaluation: Pre-processed and organized for immediate use in OCR benchmarking
## Dataset Structure

### File Organization

Each sample is stored in its own folder named by the document key:

```
document_key_1/
├── image.tif   # Document image in TIFF format
└── data.json   # OCR ground truth annotations
document_key_2/
├── image.tif
└── data.json
```
### Data Format

#### Image Files (`image.tif`)

- Format: TIFF (Tagged Image File Format)
- Content: Single-page document images
- Source: Original document pages from the IDL collection
#### Ground Truth Files (`data.json`)

The JSON schema follows the original IDL-WDS format:

```json
{
  "pages": [
    {
      "text": [
        "Line 1 of text",
        "Line 2 of text",
        "..."
      ],
      "bbox": [
        [left, top, width, height],
        [left, top, width, height],
        "..."
      ],
      "poly": [
        [
          {"X": x1, "Y": y1},
          {"X": x2, "Y": y2},
          {"X": x3, "Y": y3},
          {"X": x4, "Y": y4}
        ],
        "..."
      ],
      "score": [
        confidence_score_1,
        confidence_score_2,
        "..."
      ]
    }
  ]
}
```
#### Schema Details

- `text`: Array of text lines in reading order
- `bbox`: Bounding boxes in `[left, top, width, height]` format (normalized coordinates, 0-1)
- `poly`: Polygon coordinates for each text line (4 corner points)
- `score`: Confidence scores from Amazon Textract OCR (0-1 range)
- Coordinates: All spatial coordinates are normalized relative to page dimensions
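Because all coordinates are normalized to the 0-1 range, they must be scaled by the page dimensions before they can be drawn or cropped. A minimal sketch (the helper name `bbox_to_pixels` is ours, not part of the dataset):

```python
# Hypothetical helper: convert a normalized [left, top, width, height] bbox
# (0-1 range, as stored in data.json) to absolute pixel coordinates.
def bbox_to_pixels(bbox, page_width, page_height):
    left, top, width, height = bbox
    return (
        round(left * page_width),
        round(top * page_height),
        round(width * page_width),
        round(height * page_height),
    )

# Example: a box near the top-left of a 1000 x 2000 px page
print(bbox_to_pixels([0.1, 0.05, 0.3, 0.02], 1000, 2000))
# (100, 100, 300, 40)
```

The same scaling applies to the `poly` points: multiply each `X` by the page width and each `Y` by the page height.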
## Usage

### Loading the Dataset

```python
import json
import os

from PIL import Image


def load_sample(sample_folder):
    """Load a single sample from the dataset."""
    image_path = os.path.join(sample_folder, "image.tif")
    json_path = os.path.join(sample_folder, "data.json")

    # Load image
    image = Image.open(image_path)

    # Load ground truth
    with open(json_path, "r", encoding="utf-8") as f:
        ground_truth = json.load(f)

    return image, ground_truth


# Example usage
base_dir = "idl_wds_extracted"
sample_folders = [f for f in os.listdir(base_dir)
                  if os.path.isdir(os.path.join(base_dir, f))]

# Load first sample
image, gt = load_sample(os.path.join(base_dir, sample_folders[0]))
print(f"Image size: {image.size}")
print(f"Number of text lines: {len(gt['pages'][0]['text'])}")
```
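Once a sample is loaded, a common evaluation step is to score an OCR system's output against the ground truth text, e.g. with character error rate (CER). A self-contained sketch (the `ocr_output` string here is a made-up stand-in for whatever your OCR system produces; the dataset itself only provides the ground truth):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]


def character_error_rate(reference, hypothesis):
    # CER = edit distance / reference length
    return levenshtein(reference, hypothesis) / max(len(reference), 1)


# The reference would come from "\n".join(gt["pages"][0]["text"]);
# ocr_output is a hypothetical OCR result for the same image.
gt_text = "Line 1 of text"
ocr_output = "Line l of text"   # "1" misread as "l"
print(f"CER: {character_error_rate(gt_text, ocr_output):.3f}")
# CER: 0.071
```

For word-level metrics (WER), apply the same edit distance to `reference.split()` and `hypothesis.split()` instead of the raw character sequences.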
## Dataset Statistics
- Total Samples: 1,000 single-page documents
- Source Documents: Filtered from ~19M pages in the original IDL dataset
- Document Types: Legal documents, internal communications, reports, and other industry documents
- Text Languages: Primarily English
- Time Period: Historical industry documents (various decades)
## Licensing and Usage
This dataset inherits the licensing terms from the original IDL dataset:
- License: IDL-train license (see original dataset for full terms)
- Attribution: Please cite the original IDL and IDL-WDS datasets
## Citation

If you use this dataset, please cite the original work:

```bibtex
@dataset{idl_wds_2023,
  title={Industry Documents Library - WebDataset Format},
  author={Pablo Montalvo and Ross Wightman},
  url={https://huggingface.co/datasets/pixparse/idl-wds},
  year={2023}
}
```
## Quality and Characteristics

### Selection Process
- Documents were filtered to include only those with exactly 1 page
- Multi-page documents were excluded to ensure consistency
- All samples verified to have both image and JSON ground truth data
### Ground Truth Quality
- OCR annotations generated using Amazon Textract
- Confidence scores provided for quality assessment
- Reading order preserved through columnar detection heuristics
- Bounding boxes and polygons for spatial understanding
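Since every line carries a Textract confidence score, low-confidence lines can be dropped before evaluation. A minimal sketch against the `data.json` schema above (the helper name and the 0.9 threshold are our choices, not part of the dataset):

```python
# Hypothetical filter: keep only the text lines on a page whose
# confidence score clears a threshold, preserving the parallel
# text/bbox/poly/score arrays from data.json.
def filter_by_confidence(page, threshold=0.9):
    keep = [i for i, s in enumerate(page["score"]) if s >= threshold]
    return {
        "text": [page["text"][i] for i in keep],
        "bbox": [page["bbox"][i] for i in keep],
        "poly": [page["poly"][i] for i in keep],
        "score": [page["score"][i] for i in keep],
    }

# Tiny synthetic page for illustration
page = {
    "text": ["clean line", "noisy line"],
    "bbox": [[0.1, 0.1, 0.5, 0.02], [0.1, 0.2, 0.5, 0.02]],
    "poly": [[], []],
    "score": [0.98, 0.42],
}
print(filter_by_confidence(page)["text"])
# ['clean line']
```

Filtering all four arrays together keeps the line-level annotations aligned, which matters for any downstream spatial or reading-order analysis.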
## Recommended Use Cases
- OCR model evaluation and benchmarking
- Text detection algorithm testing
- Document layout analysis research
- Reading order evaluation
- OCR confidence score analysis
## Data Limitations
- Historical Bias: Documents reflect historical industry perspectives
- OCR Quality: Ground truth quality depends on Amazon Textract performance
- Document Variety: Limited to industry document types from IDL collection
- Single Page Only: Multi-page document scenarios not covered
- Language: Primarily English language documents
## Contact and Support
- Original Dataset: pixparse/idl-wds
- IDL Contact: Kate Tasker, UCSF (kate.tasker@ucsf.edu)
- Technical Contact: Pablo Montalvo (pablo@huggingface.co)
For questions about this specific subset, please refer to the original dataset maintainers.