---
license: mit
task_categories:
- table-question-answering
- image-to-text
tags:
- table-extraction
- benchmark
- fintabnet
- document-ai
- docld
pretty_name: DocLD FinTabNet Benchmark
size_categories:
- n<1K
---
# DocLD FinTabNet Benchmark Results

Benchmark results for DocLD table extraction on the FinTabNet dataset.
## Results Summary
| Metric | Value |
|---|---|
| Mean Accuracy | 82.1% |
| Median | 83.2% |
| P25 / P75 | 73.3% / 97.4% |
| Min / Max | 22.7% / 100.0% |
| Scored Samples | 500 |
| Total Samples | 500 |
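The summary statistics above can be reproduced from the per-sample scores with the standard library alone. A minimal sketch (the `"score"` field name is an assumption about the schema of `results.json`):

```python
import statistics

def summarize(scores):
    """Aggregate per-sample accuracies into mean/median/quartile stats."""
    qs = statistics.quantiles(scores, n=4)  # exclusive-method quartiles
    return {
        "mean": statistics.mean(scores),
        "median": statistics.median(scores),
        "p25": qs[0],
        "p75": qs[2],
        "min": min(scores),
        "max": max(scores),
        "n": len(scores),
    }

# In practice the scores would come from results.json, e.g.:
#   scores = [r["score"] for r in json.load(open("results.json"))]
# (the "score" field name is an assumption, not a documented schema)
example = [0.227, 0.733, 0.832, 0.974, 1.0]
print(summarize(example))
```

Note that `statistics.quantiles` defaults to the exclusive method, so its quartiles can differ slightly from other tools (e.g. NumPy's default linear interpolation) on small samples.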
## Methodology
- Dataset: FinTabNet_OTSL — 500 samples from the test split
- Extraction: DocLD vision-based table extraction
- Scoring: Needleman-Wunsch hierarchical alignment (same as RD-TableBench)
- Output: HTML tables with rowspan/colspan for merged cells
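To illustrate the scoring step: the benchmark's scorer aligns predicted and reference tables hierarchically (rows first, then cells within aligned rows). The sketch below shows only the core Needleman-Wunsch step on a flat cell sequence, not the full hierarchical procedure; the match/gap weights are illustrative defaults, not the benchmark's actual parameters.

```python
def nw_similarity(a, b, match=1.0, mismatch=0.0, gap=0.0):
    """Needleman-Wunsch alignment score between two cell sequences,
    normalized by the longer sequence's length."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(
                dp[i - 1][j - 1] + sub,  # align the two cells
                dp[i - 1][j] + gap,      # gap in b (cell dropped)
                dp[i][j - 1] + gap,      # gap in a (cell inserted)
            )
    return dp[n][m] / max(n, m, 1)

pred = ["Revenue", "2023", "1,200"]
gold = ["Revenue", "2023", "1,250"]
print(nw_similarity(pred, gold))  # 2 exact matches out of 3 cells
```

The hierarchical variant applies the same dynamic program twice: once to align rows (using per-row cell similarity as the match score) and once within each aligned row pair.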
## Comparison
| Provider | FinTabNet Accuracy |
|---|---|
| DocLD | 82.1% |
| GTE (IBM) | ~78% |
| TATR (Microsoft) | ~65% |
## Files

- `results.json` — Full benchmark results with per-sample scores
- `predictions/` — HTML predictions for each sample
- `charts/` — Visualization PNGs