
IDL-WDS OCR Evaluation Dataset

Dataset Description

This dataset is a carefully curated subset of the original pixparse/idl-wds dataset, specifically designed for OCR evaluation and benchmarking.

Dataset Summary

  • Source Dataset: pixparse/idl-wds - Industry Documents Library (IDL)
  • Purpose: OCR evaluation on single-page documents
  • Sample Count: 1,000 carefully selected single-page documents
  • Selection Criteria: Only documents with exactly 1 page in their JSON metadata
  • Format: Organized folder structure with paired image and ground truth data

Key Features

  • Single-Page Focus: All documents contain exactly one page, eliminating multi-page complexity for OCR evaluation
  • High-Quality Ground Truth: Each sample includes detailed OCR annotations with bounding boxes, polygons, and confidence scores
  • Standardized Format: Consistent file structure across all samples
  • Ready for Evaluation: Pre-processed and organized for immediate use in OCR benchmarking

Dataset Structure

File Organization

Each sample is stored in its own folder named by the document key:


document_key_1/
├── image.tif          # Document image in TIFF format
└── data.json          # OCR ground truth annotations
document_key_2/
├── image.tif
└── data.json

Data Format

Image Files (image.tif)

  • Format: TIFF (Tagged Image File Format)
  • Content: Single-page document images
  • Source: Original document pages from the IDL collection

Ground Truth Files (data.json)

The JSON schema follows the original IDL-WDS format:

{
  "pages": [
    {
      "text": [
        "Line 1 of text",
        "Line 2 of text",
        "..."
      ],
      "bbox": [
        [left, top, width, height],
        [left, top, width, height],
        "..."
      ],
      "poly": [
        [
          {"X": x1, "Y": y1}, 
          {"X": x2, "Y": y2}, 
          {"X": x3, "Y": y3}, 
          {"X": x4, "Y": y4}
        ],
        "..."
      ],
      "score": [
        confidence_score_1,
        confidence_score_2,
        "..."
      ]
    }
  ]
}

Schema Details

  • text: Array of text lines in reading order
  • bbox: Bounding boxes in [left, top, width, height] format (normalized coordinates 0-1)
  • poly: Polygon coordinates for each text line (4 corner points)
  • score: Confidence scores from Amazon Textract OCR (0-1 range)
  • Coordinates: All spatial coordinates are normalized relative to page dimensions
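
Because all coordinates are normalized to the 0-1 range, they must be scaled by the page dimensions before drawing or cropping. A minimal sketch (the helper name `bbox_to_pixels` is illustrative, not part of the dataset):

```python
def bbox_to_pixels(bbox, image_width, image_height):
    """Convert a normalized [left, top, width, height] box to pixel coordinates.

    Returns (x0, y0, x1, y1), the corner form expected by e.g. PIL's
    ImageDraw.rectangle or Image.crop.
    """
    left, top, width, height = bbox
    x0 = left * image_width
    y0 = top * image_height
    x1 = x0 + width * image_width
    y1 = y0 + height * image_height
    return (x0, y0, x1, y1)

# A box covering the left half of a 1000x800 page:
print(bbox_to_pixels([0.0, 0.0, 0.5, 1.0], 1000, 800))  # (0.0, 0.0, 500.0, 800.0)
```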

Usage

Loading the Dataset

import json
import os
from PIL import Image

def load_sample(sample_folder):
    """Load a single sample from the dataset"""
    image_path = os.path.join(sample_folder, "image.tif")
    json_path = os.path.join(sample_folder, "data.json")
    
    # Load image
    image = Image.open(image_path)
    
    # Load ground truth
    with open(json_path, 'r', encoding='utf-8') as f:
        ground_truth = json.load(f)
    
    return image, ground_truth

# Example usage
base_dir = "idl_wds_extracted"
# Sort for a deterministic iteration order across runs
sample_folders = sorted(f for f in os.listdir(base_dir)
                        if os.path.isdir(os.path.join(base_dir, f)))

# Load first sample
image, gt = load_sample(os.path.join(base_dir, sample_folders[0]))
print(f"Image size: {image.size}")
print(f"Number of text lines: {len(gt['pages'][0]['text'])}")
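
To sanity-check a sample, the ground-truth polygons can be overlaid on the image. A sketch assuming the schema above (`draw_annotations` is a hypothetical helper; polygons are scaled from normalized coordinates to pixels):

```python
from PIL import Image, ImageDraw

def draw_annotations(image, page, color="red"):
    """Overlay the ground-truth polygons from one page dict onto a copy of the image."""
    annotated = image.convert("RGB").copy()
    draw = ImageDraw.Draw(annotated)
    w, h = annotated.size
    for poly in page["poly"]:
        # Each polygon is four {"X": ..., "Y": ...} corner points, normalized 0-1.
        points = [(pt["X"] * w, pt["Y"] * h) for pt in poly]
        draw.polygon(points, outline=color)
    return annotated

# annotated = draw_annotations(image, gt["pages"][0])
# annotated.save("annotated.png")
```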

Dataset Statistics

  • Total Samples: 1,000 single-page documents
  • Source Documents: Filtered from ~19M pages in original IDL dataset
  • Document Types: Legal documents, internal communications, reports, and other industry documents
  • Text Languages: Primarily English
  • Time Period: Historical industry documents (various decades)

Licensing and Usage

This dataset inherits the licensing terms from the original IDL dataset:

  • License: IDL-train license (see original dataset for full terms)
  • Attribution: Please cite the original IDL and IDL-WDS datasets

Citation

If you use this dataset, please cite the original work:

@dataset{idl_wds_2023,
  title={Industry Documents Library - WebDataset Format},
  author={Pablo Montalvo and Ross Wightman},
  url={https://huggingface.co/datasets/pixparse/idl-wds},
  year={2023}
}

Quality and Characteristics

Selection Process

  • Documents were filtered to include only those with exactly 1 page
  • Multi-page documents were excluded to ensure consistency
  • All samples verified to have both image and JSON ground truth data

Ground Truth Quality

  • OCR annotations generated using Amazon Textract
  • Confidence scores provided for quality assessment
  • Reading order preserved through column-detection heuristics
  • Bounding boxes and polygons for spatial understanding

Recommended Use Cases

  • OCR model evaluation and benchmarking
  • Text detection algorithm testing
  • Document layout analysis research
  • Reading order evaluation
  • OCR confidence score analysis
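
For OCR benchmarking, a common metric is character error rate (CER). A self-contained sketch (joining the per-line `text` field with newlines is an assumption about how you choose to concatenate a page's text; the dataset itself prescribes no scoring method):

```python
def levenshtein(a, b):
    """Edit distance between two strings (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    """Character error rate: edit distance normalized by reference length."""
    if not reference:
        return 0.0 if not hypothesis else 1.0
    return levenshtein(reference, hypothesis) / len(reference)

# reference = "\n".join(gt["pages"][0]["text"])  # ground truth from data.json
# score = cer(reference, ocr_model_output)
print(cer("hello world", "helo world"))
```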

Data Limitations

  • Historical Bias: Documents reflect historical industry perspectives
  • OCR Quality: Ground truth quality depends on Amazon Textract performance
  • Document Variety: Limited to industry document types from IDL collection
  • Single Page Only: Multi-page document scenarios not covered
  • Language: Primarily English language documents

Contact and Support

For questions about this specific subset, please refer to the original dataset maintainers.
