---
license: mit
task_categories:
- table-question-answering
- image-to-text
tags:
- table-extraction
- benchmark
- fintabnet
- document-ai
- docld
pretty_name: DocLD FinTabNet Benchmark
size_categories:
- n<1K
---
# DocLD FinTabNet Benchmark Results
Benchmark results for [DocLD](https://docld.com) table extraction on the
[FinTabNet](https://paperswithcode.com/dataset/fintabnet) dataset.
## Results Summary
| Metric | Value |
|--------|-------|
| **Mean Accuracy** | 82.1% |
| **Median** | 83.2% |
| **P25 / P75** | 73.3% / 97.4% |
| **Min / Max** | 22.7% / 100.0% |
| **Scored Samples** | 500 |
| **Total Samples** | 500 |
## Methodology
- **Dataset**: [FinTabNet_OTSL](https://huggingface.co/datasets/docling-project/FinTabNet_OTSL) — 500 samples from the test split
- **Extraction**: DocLD vision-based table extraction
- **Scoring**: Needleman-Wunsch hierarchical alignment (same as [RD-TableBench](https://github.com/reductoai/rd-tablebench))
- **Output**: HTML tables with rowspan/colspan for merged cells
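To make the scoring concrete, here is a minimal sketch of hierarchical Needleman-Wunsch alignment in the spirit of RD-TableBench (not the actual RD-TableBench code): cells are compared by character-level similarity, rows are aligned cell-by-cell, and tables are aligned row-by-row. The `gap` penalty and the normalization by the longer sequence are illustrative assumptions, and tables are assumed to already be parsed from HTML into lists of row/cell strings.

```python
# Hedged sketch of hierarchical Needleman-Wunsch table scoring.
# Assumptions (not from the benchmark code): gap penalty of -0.5,
# score normalized by the longer sequence, difflib for cell similarity.
from difflib import SequenceMatcher


def nw_align_score(a, b, sim, gap=-0.5):
    """Needleman-Wunsch global alignment; returns the total score."""
    n, m = len(a), len(b)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + sim(a[i - 1], b[j - 1]),  # match/mismatch
                dp[i - 1][j] + gap,                           # gap in b
                dp[i][j - 1] + gap,                           # gap in a
            )
    return dp[n][m]


def cell_sim(c1, c2):
    """Character-level similarity between two cell strings, in [0, 1]."""
    return SequenceMatcher(None, c1, c2).ratio()


def row_sim(r1, r2):
    """Align the cells of two rows; normalize by the longer row."""
    if not r1 and not r2:
        return 1.0
    return max(0.0, nw_align_score(r1, r2, cell_sim) / max(len(r1), len(r2)))


def table_score(pred, gold):
    """Align predicted rows against ground-truth rows; normalize by the longer table."""
    if not pred and not gold:
        return 1.0
    return max(0.0, nw_align_score(pred, gold, row_sim) / max(len(pred), len(gold)))


pred = [["Revenue", "2023"], ["Net income", "1,200"]]
gold = [["Revenue", "2023"], ["Net income", "1,250"]]
print(round(table_score(pred, gold), 3))  # → 0.95
```

Because the alignment is hierarchical, a single wrong digit in one cell only dents that cell's similarity rather than derailing the whole table score, which is what makes this metric robust to minor OCR noise.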
## Comparison
| Provider | FinTabNet Accuracy |
|----------|-------------------|
| **DocLD** | **82.1%** |
| GTE (IBM) | ~78% |
| TATR (Microsoft) | ~65% |
## Files
- `results.json` — Full benchmark results with per-sample scores
- `predictions/` — HTML predictions for each sample
- `charts/` — Visualization PNGs
## Links
- [DocLD](https://docld.com)
- [Blog Post](https://docld.com/blog/docld-fintabnet)
- [Benchmark Code](https://github.com/Doc-LD/fintabnet-bench)
- [RD-TableBench Results](https://docld.com/blog/docld-tablebench)