---
license: mit
task_categories:
- image-to-text
- document-question-answering
language:
- en
tags:
- pdf-parsing
- ocr
- benchmark
- mathematical-formulas
- tables
- llm-as-a-judge
size_categories:
- n<1K
configs:
- config_name: 2026-q1-tables-only
  data_files:
  - split: test
    path: 2026-q1-tables-only/ground_truth/*.json
- config_name: 2026-q1-formulas-only
  data_files:
  - split: test
    path: 2026-q1-formulas-only/ground_truth/*.json
---

# PDF Parse Bench

Benchmark for evaluating how effectively PDF parsing solutions extract **mathematical formulas** and **tables** from documents.

We generate synthetic PDFs with diverse formatting scenarios, parse them with each parser under evaluation, and score the extracted content using **LLM-as-a-Judge**. This semantic evaluation approach [substantially outperforms traditional metrics](https://github.com/phorn1/pdf-parse-bench#why-llm-as-a-judge) in agreement with human judgment.

## Leaderboard (2026-Q1)

Results are based on two benchmark datasets, each containing 100 synthetic PDFs:

| Parser | Tables | Formulas |
|--------|--------|----------|
| [Gemini 3 Flash](https://deepmind.google/models/gemini/flash/) | 9.50 | 9.79 |
| [LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B) | 9.08 | 9.57 |
| [Mistral OCR](https://mistral.ai/) | 8.89 | 9.48 |
| [dots.ocr](https://github.com/rednote-hilab/dots.ocr) | 8.73 | 9.55 |
| [Mathpix](https://mathpix.com/) | 8.53 | 9.66 |
| [Chandra](https://huggingface.co/datalab-to/chandra) | 8.43 | 9.45 |
| [Qwen3-VL-235B](https://github.com/QwenLM/Qwen3-VL) | 8.43 | 9.84 |
| [MonkeyOCR-pro-3B](https://github.com/Yuliang-Liu/MonkeyOCR) | 8.39 | 9.50 |
| [GLM-4.5V](https://github.com/zai-org/GLM-V) | 7.98 | 9.37 |
| [GPT-5 mini](https://openai.com/) | 7.14 | 5.57 |
| [Claude Sonnet 4.6](https://docs.anthropic.com/en/docs/about-claude/models) | 7.02 | 8.50 |
| [Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) | 6.92 | 9.21 |
| [PP-StructureV3](https://github.com/PaddlePaddle/PaddleOCR) | 6.86 | 9.59 |
| [Gemini 2.5 Flash](https://deepmind.google/models/gemini/flash/) | 6.85 | 6.51 |
| [MinerU2.5](https://mineru.net/) | 6.49 | 9.32 |
| [GPT-5 nano](https://openai.com/) | 6.48 | 4.78 |
| [DeepSeek-OCR](https://github.com/deepseek-ai/DeepSeek-OCR) | 5.75 | 8.97 |
| [PaddleOCR-VL](https://huggingface.co/PaddlePaddle/PaddleOCR-VL-1.5) | 5.39 | 8.47 |
| [PyMuPDF4LLM](https://github.com/pymupdf/PyMuPDF4LLM) | 5.25 | 4.53 |
| [GOT-OCR2.0](https://github.com/Ucas-HaoranWei/GOT-OCR2.0) | 5.13 | 8.01 |
| [olmOCR-2-7B](https://github.com/allenai/olmocr) | 4.05 | 9.35 |
| [GROBID](https://github.com/kermitt2/grobid) | 2.10 | 7.01 |

All scores are **LLM-as-a-Judge** ratings on a 0–10 scale, judged by Gemini 3 Flash via OpenRouter.
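
For reference, the judging step can be sketched in a few lines of Python. Everything below (the prompt template, the `Score:` reply format, the regex, and the mean aggregation) is an illustrative assumption, not the benchmark's actual implementation; see the pdf-parse-bench repository for the real pipeline.

```python
import re
from statistics import mean

# Hypothetical judge prompt; the benchmark's real prompt likely differs.
JUDGE_PROMPT = (
    "Compare the extracted {kind} against the ground truth.\n"
    "Ground truth:\n{truth}\n\nExtraction:\n{extraction}\n"
    "Reply with 'Score: <0-10>'."
)

def parse_score(judge_reply: str) -> float:
    """Pull a 0-10 rating out of a judge reply like 'Score: 8.5'."""
    match = re.search(r"Score:\s*(\d+(?:\.\d+)?)", judge_reply)
    if match is None:
        raise ValueError(f"no score found in: {judge_reply!r}")
    score = float(match.group(1))
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"score out of range: {score}")
    return score

def aggregate(replies: list[str]) -> float:
    """Mean judge score across all evaluated items, rounded to two decimals."""
    return round(mean(parse_score(r) for r in replies), 2)
```

A parser's leaderboard entry would then be something like `aggregate(replies_for_all_100_pdfs)`.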

## Datasets

- **`2026-q1-tables-only`** — 100 PDFs with 451 tables (simple, moderate, complex)
- **`2026-q1-formulas-only`** — 100 PDFs with 1413 inline + 657 display-mode mathematical formulas

PDFs are generated synthetically using LaTeX with randomized parameters (document class, fonts, margins, column layout, line spacing). Since PDFs are generated from LaTeX source, ground truth is obtained automatically.
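
Each config in the YAML header points at `ground_truth/*.json` files, so a minimal loader only needs to glob that directory. The per-file JSON schema below is an assumption for illustration; inspect a sample file for the actual fields.

```python
import json
from pathlib import Path

def load_ground_truth(dataset_dir: str) -> dict[str, dict]:
    """Map each document id (the JSON filename stem) to its ground-truth record."""
    records: dict[str, dict] = {}
    for path in sorted(Path(dataset_dir).glob("ground_truth/*.json")):
        records[path.stem] = json.loads(path.read_text(encoding="utf-8"))
    return records
```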

## How to Evaluate Your Parser

```bash
pip install pdf-parse-bench
```

See the full evaluation guide at **[github.com/phorn1/pdf-parse-bench](https://github.com/phorn1/pdf-parse-bench)**.

## Why LLM-as-a-Judge?

Rule-based metrics correlate poorly with human judgment. We validated this in two human annotation studies:

- **[formula-metric-study](https://github.com/phorn1/formula-metric-study)** — 750 human ratings: text-similarity metrics reach r = 0.01 and CDM r = 0.31, while LLM judges reach r = 0.74–0.82
- **[table-metric-study](https://github.com/phorn1/table-metric-study)** — 1,500+ human ratings: the best rule-based metrics (TEDS, GriTS) top out at r = 0.70, while LLM judges reach r = 0.94
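
The r values above are Pearson correlations between a metric's scores and human ratings. For reference, the statistic can be computed from scratch as follows (a standalone sketch; the studies themselves presumably use a statistics library):

```python
from math import sqrt

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between metric scores xs and human ratings ys."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```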

## Citation

```bibtex
@misc{horn2025formulabench,
  title = {Benchmarking Document Parsers on Mathematical Formula Extraction from PDFs},
  author = {Horn, Pius and Keuper, Janis},
  year = {2025},
  eprint = {2511.10390},
  archivePrefix = {arXiv},
  primaryClass = {cs.CV},
  url = {https://arxiv.org/abs/2511.10390}
}

@misc{horn2026tablebench,
  title = {Benchmarking PDF Parsers on Table Extraction with LLM-based Semantic Evaluation},
  author = {Horn, Pius and Keuper, Janis},
  year = {2026},
  eprint = {2603.18652},
  archivePrefix = {arXiv},
  primaryClass = {cs.CV},
  url = {https://arxiv.org/abs/2603.18652}
}
```

## Acknowledgments

This work has been supported by the German Federal Ministry of Research, Technology and Space (BMFTR) in the program "Forschung an Fachhochschulen in Kooperation mit Unternehmen (FH-Kooperativ)" within the joint project **LLMpraxis** under grant 13FH622KX2.

<p align="center">
  <img src="https://raw.githubusercontent.com/phorn1/pdf-parse-bench/main/assets/BMFTR_logo.png" alt="BMFTR" width="150" />
  <img src="https://raw.githubusercontent.com/phorn1/pdf-parse-bench/main/assets/HAW_logo.png" alt="HAW" width="150" />
</p>