---
license: mit
task_categories:
  - image-to-text
  - document-question-answering
language:
  - en
tags:
  - pdf-parsing
  - ocr
  - benchmark
  - mathematical-formulas
  - tables
  - llm-as-a-judge
size_categories:
  - n<1K
configs:
  - config_name: 2026-q1-tables-only
    data_files:
      - split: test
        path: 2026-q1-tables-only/test.jsonl
  - config_name: 2026-q1-formulas-only
    data_files:
      - split: test
        path: 2026-q1-formulas-only/test.jsonl
---

# PDF Parse Bench


A benchmark for evaluating how effectively PDF parsing solutions extract mathematical formulas and tables from documents.

We generate synthetic PDFs covering diverse formatting scenarios, parse them with each parser, and score the extracted content with an LLM-as-a-Judge. This semantic evaluation agrees with human judgment substantially better than traditional rule-based metrics.

## Leaderboard (2026-Q1)

Results are based on two benchmark datasets, each containing 100 synthetic PDFs:

| Parser | Tables | Formulas |
|---|---|---|
| Gemini 3 Flash | 9.50 | 9.79 |
| LightOnOCR-2-1B | 9.08 | 9.57 |
| Mistral OCR | 8.89 | 9.48 |
| dots.ocr | 8.73 | 9.55 |
| Mathpix | 8.53 | 9.66 |
| Chandra | 8.43 | 9.45 |
| Qwen3-VL-235B | 8.43 | 9.84 |
| MonkeyOCR-pro-3B | 8.39 | 9.50 |
| GLM-4.5V | 7.98 | 9.37 |
| GPT-5 mini | 7.14 | 5.57 |
| Claude Sonnet 4.6 | 7.02 | 8.50 |
| Nanonets-OCR-s | 6.92 | 9.21 |
| PP-StructureV3 | 6.86 | 9.59 |
| Gemini 2.5 Flash | 6.85 | 6.51 |
| MinerU2.5 | 6.49 | 9.32 |
| GPT-5 nano | 6.48 | 4.78 |
| DeepSeek-OCR | 5.75 | 8.97 |
| PaddleOCR-VL | 5.39 | 8.47 |
| PyMuPDF4LLM | 5.25 | 4.53 |
| GOT-OCR2.0 | 5.13 | 8.01 |
| olmOCR-2-7B | 4.05 | 9.35 |
| GROBID | 2.10 | 7.01 |

All scores are LLM-as-a-Judge ratings on a 0–10 scale, judged by Gemini 3 Flash via OpenRouter.
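As a rough illustration of how such a judgment can be requested, the sketch below builds an OpenAI-compatible chat-completion payload of the kind OpenRouter accepts. The model identifier, prompt wording, and temperature are assumptions for illustration, not the benchmark's actual implementation.

```python
import json

# Hypothetical LLM-as-a-Judge request sketch (OpenAI-compatible schema, as
# used by OpenRouter). Model id and prompt wording are assumptions.
JUDGE_PROMPT = (
    "You are grading a PDF parser's output against the LaTeX ground truth.\n"
    "Rate the extraction on a 0-10 scale and answer with just the number.\n\n"
    "Ground truth:\n{truth}\n\nParser output:\n{parsed}"
)

def build_judge_request(truth: str, parsed: str,
                        model: str = "google/gemini-3-flash") -> dict:
    """Return a chat-completion payload for judging one extraction."""
    return {
        "model": model,
        "messages": [
            {"role": "user",
             "content": JUDGE_PROMPT.format(truth=truth, parsed=parsed)},
        ],
        "temperature": 0,  # deterministic grading
    }

payload = build_judge_request(r"\frac{a}{b}", "a/b")
print(json.dumps(payload, indent=2))
```

The returned score (a number from 0 to 10) would then be averaged over all formulas or tables in the dataset.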

## Datasets

- `2026-q1-tables-only` — 100 PDFs with 451 tables (simple, moderate, and complex)
- `2026-q1-formulas-only` — 100 PDFs with 1413 inline and 657 display-mode mathematical formulas

PDFs are generated synthetically using LaTeX with randomized parameters (document class, fonts, margins, column layout, line spacing). Since PDFs are generated from LaTeX source, ground truth is obtained automatically.
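To make the randomization concrete, here is a small sketch of preamble generation of the kind described above. The parameter pools (document classes, font packages, margin range, spacing factors) are illustrative assumptions, not the benchmark's actual generator:

```python
import random

# Illustrative only: pools of randomized LaTeX layout parameters.
DOC_CLASSES = ["article", "report", "scrartcl"]
FONT_PACKAGES = ["lmodern", "mathptmx", "newtxtext,newtxmath"]

def random_preamble(rng: random.Random) -> str:
    """Build a LaTeX preamble with randomized class, fonts, margins, layout."""
    cls = rng.choice(DOC_CLASSES)
    columns = rng.choice(["onecolumn", "twocolumn"])
    margin = rng.uniform(1.5, 3.0)          # page margin in cm
    spacing = rng.choice([1.0, 1.15, 1.5])  # line-spacing factor
    font = rng.choice(FONT_PACKAGES)
    return "\n".join([
        rf"\documentclass[{columns}]{{{cls}}}",
        rf"\usepackage[margin={margin:.1f}cm]{{geometry}}",
        rf"\usepackage{{{font}}}",
        rf"\renewcommand{{\baselinestretch}}{{{spacing}}}",
    ])

print(random_preamble(random.Random(0)))
```

Because each PDF is compiled from a known LaTeX source, the formula and table ground truth comes for free from that source.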

## How to Evaluate Your Parser

```bash
pip install pdf-parse-bench
```

See the full evaluation guide at [github.com/phorn1/pdf-parse-bench](https://github.com/phorn1/pdf-parse-bench).

## Why LLM-as-a-Judge?

Rule-based metrics correlate poorly with human judgment. We validated this in two human annotation studies:

- **formula-metric-study** — 750 human ratings: text metrics r = 0.01, CDM r = 0.31, LLM judges r = 0.74–0.82
- **table-metric-study** — 1,500+ human ratings: the best rule-based metrics (TEDS, GriTS) reach r = 0.70, while LLM judges reach r = 0.94

## Citation

```bibtex
@misc{horn2025formulabench,
    title = {Benchmarking Document Parsers on Mathematical Formula Extraction from PDFs},
    author = {Horn, Pius and Keuper, Janis},
    year = {2025},
    eprint = {2512.09874},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV},
    url = {https://arxiv.org/abs/2512.09874}
}

@misc{horn2026tablebench,
    title = {Benchmarking PDF Parsers on Table Extraction with LLM-based Semantic Evaluation},
    author = {Horn, Pius and Keuper, Janis},
    year = {2026},
    eprint = {2603.18652},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV},
    url = {https://arxiv.org/abs/2603.18652}
}
```

## Acknowledgments

This work has been supported by the German Federal Ministry of Research, Technology and Space (BMFTR) in the program "Forschung an Fachhochschulen in Kooperation mit Unternehmen (FH-Kooperativ)" within the joint project LLMpraxis under grant 13FH622KX2.
