---
license: odc-by
tags:
  - ocr
  - benchmark
  - pdf
  - document-understanding
language:
  - en
pretty_name: olmOCR-bench Pre-Rendered
size_categories:
  - 1K<n<10K
configs:
  - config_name: arxiv_math
    data_files:
      - split: test
        path: images/arxiv_math/**
  - config_name: headers_footers
    data_files:
      - split: test
        path: images/headers_footers/**
  - config_name: long_tiny_text
    data_files:
      - split: test
        path: images/long_tiny_text/**
  - config_name: multi_column
    data_files:
      - split: test
        path: images/multi_column/**
  - config_name: old_scans
    data_files:
      - split: test
        path: images/old_scans/**
  - config_name: old_scans_math
    data_files:
      - split: test
        path: images/old_scans_math/**
  - config_name: tables
    data_files:
      - split: test
        path: images/tables/**
---

# olmOCR-bench Pre-Rendered

Pre-rendered PNG images of the olmOCR-bench benchmark dataset, ready for zero-setup evaluation of any OCR / vision model.

## What This Is

The official olmOCR benchmark requires downloading 1,403 PDFs locally and rendering each page to a PNG image before sending it to a model. Every benchmark runner in the official repo performs this same rendering step internally; see `olmocr/data/renderpdf.py::render_pdf_to_base64png()`.

This dataset eliminates that setup entirely by hosting the pre-rendered images directly. The PNGs are rendered at `target_longest_image_dim=2048`, the same default resolution used by the official olmOCR `render_pdf_to_base64png()` function and by the GPT-4o, Claude, and Gemini benchmark runners.

All files are accessible via direct URL, so you can evaluate any model by just pointing at these URLs — no local downloads, no PDF rendering tools, no dataset cloning.

## Dataset Structure

The dataset has 7 subsets (one per benchmark category), each with a `test` split:

| Subset | PDFs | Tests | Test Types |
|---|---:|---:|---|
| `arxiv_math` | 522 | 2,927 | math |
| `headers_footers` | 266 | 753 | absent |
| `long_tiny_text` | 62 | 442 | present |
| `multi_column` | 231 | 884 | order |
| `old_scans` | 98 | 526 | present, absent, order |
| `old_scans_math` | 36 | 458 | math |
| `tables` | 188 | 1,020 | table |

## Loading with `datasets`

```python
from datasets import load_dataset

ds = load_dataset("shhdwi/olmocr-pre-rendered", "arxiv_math", split="test")
print(ds[0])  # {'image': <PIL.Image>, 'pdf_stem': '...', 'category': '...', ...}
```
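Each example's `image` field is a PIL image. To send a page to a vision model over an API you typically base64-encode it first; here is a minimal stdlib helper (the function name is illustrative, not part of the dataset):

```python
import base64
import io

def image_to_data_url(image) -> str:
    """Encode a PIL image as a base64 PNG data URL, the payload format
    most vision APIs accept for image inputs."""
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    return "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode("ascii")
```

With the dataset loaded as above, `image_to_data_url(ds[0]["image"])` yields a string you can drop into an API request body.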

## Contents

| Directory | Contents | Count |
|---|---|---|
| `images/` | Pre-rendered PNG images (page 1, 2048 px longest dim) with a `metadata.jsonl` per category | 1,403 images |
| `ground_truth/` | JSONL test case files (from allenai/olmOCR-bench) | 7,010 tests |
| `predictions/` | Published model prediction caches | 1,403 `.md` files |

## Rendering Details

Each PDF page is rendered to PNG matching the official olmOCR benchmark process:

- **Resolution**: `target_longest_image_dim = 2048` (longest side scaled to 2048 px, aspect ratio preserved)
- **Renderer**: PyMuPDF (same pixel output as the `pdftoppm` path used in the official repo)
- **Pages**: page 1 only (the benchmark tests only page 1 of each PDF)
- **Naming**: `{pdf_stem}_pg1.png`

This matches what the official benchmark runners do internally:

- `run_chatgpt.py`: `render_pdf_to_base64png(pdf_path, target_longest_image_dim=2048)`
- `run_claude.py`: `render_pdf_to_base64png(pdf_path, target_longest_image_dim=2048)`
- `run_gemini.py`: `render_pdf_to_base64png(pdf_path, target_longest_image_dim=2048)`
- `run_server.py`: `render_pdf_to_base64png(pdf_path, target_longest_image_dim=1024)` (for smaller models)

## Quick Start

Evaluate any model with zero setup:

```bash
pip install litellm httpx

# Run on any litellm-supported model
python run_bench.py --model gpt-4o
python run_bench.py --model claude-sonnet-4-20250514
python run_bench.py --model gemini/gemini-2.0-flash

# Run specific categories only
python run_bench.py --model gpt-4o --categories arxiv_math headers_footers

# Evaluate published predictions (no API key needed)
python run_bench.py --evaluate nanonets-optimal-v4
```

## Direct File Access

Every file is accessible via URL:

```
https://huggingface.co/datasets/shhdwi/olmocr-pre-rendered/resolve/main/images/arxiv_math/2503.05390_pg14_pg1.png
https://huggingface.co/datasets/shhdwi/olmocr-pre-rendered/resolve/main/ground_truth/arxiv_math.jsonl
```
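For example, fetching one page image and its category's ground-truth tests needs nothing beyond the Python standard library (the `file_url` helper is ours, not part of the repo):

```python
import json
from urllib.request import urlopen

# Base URL for direct downloads from this dataset repo.
BASE = "https://huggingface.co/datasets/shhdwi/olmocr-pre-rendered/resolve/main"

def file_url(relpath: str) -> str:
    """Build the direct-download URL for any file in the repo."""
    return f"{BASE}/{relpath}"

if __name__ == "__main__":
    # Raw PNG bytes for one pre-rendered page...
    png_bytes = urlopen(file_url("images/arxiv_math/2503.05390_pg14_pg1.png")).read()
    # ...and the JSONL ground-truth test cases for the same category.
    lines = urlopen(file_url("ground_truth/arxiv_math.jsonl")).read().decode().splitlines()
    tests = [json.loads(line) for line in lines if line.strip()]
    print(len(png_bytes), len(tests))
```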

## Attribution

Based on olmOCR-bench by the Allen Institute for AI (paper). Licensed under ODC-BY-1.0.