---
license: mit
task_categories:
- table-question-answering
- image-to-text
tags:
- table-extraction
- benchmark
- fintabnet
- document-ai
- docld
pretty_name: DocLD FinTabNet Benchmark
size_categories:
- n<1K
---
# DocLD FinTabNet Benchmark Results
Benchmark results for [DocLD](https://docld.com) table extraction on the
[FinTabNet](https://paperswithcode.com/dataset/fintabnet) dataset.
## Results Summary
| Metric | Value |
|--------|-------|
| **Mean Accuracy** | 82.1% |
| **Median** | 83.2% |
| **P25 / P75** | 73.3% / 97.4% |
| **Min / Max** | 22.7% / 100.0% |
| **Scored Samples** | 500 |
| **Total Samples** | 500 |
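The summary statistics above can be recomputed from the per-sample accuracy scores using only Python's standard library. The sketch below is illustrative, with a made-up placeholder score list rather than the actual benchmark data:

```python
# Sketch: derive the summary metrics (mean, median, P25/P75, min/max)
# from a list of per-sample accuracy scores. The scores here are
# placeholder values, not the real benchmark results.
from statistics import mean, median, quantiles

def summarize(scores):
    """Compute the summary metrics reported in the results table."""
    q1, _, q3 = quantiles(scores, n=4)  # quartile cut points -> P25, P50, P75
    return {
        "mean": mean(scores),
        "median": median(scores),
        "p25": q1,
        "p75": q3,
        "min": min(scores),
        "max": max(scores),
        "scored": len(scores),
    }

print(summarize([22.7, 73.3, 83.2, 97.4, 100.0]))
```

Note that `statistics.quantiles` defaults to the exclusive interpolation method, so percentile values may differ slightly from other conventions (e.g. NumPy's default).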
## Methodology
- **Dataset**: [FinTabNet_OTSL](https://huggingface.co/datasets/docling-project/FinTabNet_OTSL) — 500 samples from the test split
- **Extraction**: DocLD vision-based table extraction
- **Scoring**: Needleman-Wunsch hierarchical alignment (same as [RD-TableBench](https://github.com/reductoai/rd-tablebench))
- **Output**: HTML tables with rowspan/colspan for merged cells
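The scoring step above builds on Needleman-Wunsch global alignment, applied hierarchically to table rows and cells in RD-TableBench. As a rough illustration of the alignment core, here is a minimal sketch that aligns two cell sequences; the cell texts and scoring parameters are assumptions for the example, not the benchmark's exact configuration:

```python
# Minimal sketch of Needleman-Wunsch global alignment, the sequence-
# alignment primitive underlying hierarchical table scoring. Scoring
# parameters (match/mismatch/gap) are illustrative, not the values
# used by the benchmark.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Return the optimal global-alignment score of sequences a and b."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(
                dp[i - 1][j - 1] + sub,  # substitute / match
                dp[i - 1][j] + gap,      # gap in b
                dp[i][j - 1] + gap,      # gap in a
            )
    return dp[n][m]

# A predicted row of cells vs. a ground-truth row (hypothetical data):
pred = ["Revenue", "2020", "2021"]
gold = ["Revenue", "2019", "2020", "2021"]
print(needleman_wunsch(pred, gold))  # 3 matches, 1 gap -> 2
```

In the hierarchical variant, the same dynamic program aligns rows against rows, with per-cell similarity feeding the substitution score instead of exact equality.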
## Comparison
| Provider | FinTabNet Accuracy |
|----------|-------------------|
| **DocLD** | **82.1%** |
| GTE (IBM) | ~78% |
| TATR (Microsoft) | ~65% |
## Files
- `results.json` — Full benchmark results with per-sample scores
- `predictions/` — HTML predictions for each sample
- `charts/` — Visualization PNGs
## Links
- [DocLD](https://docld.com)
- [Blog Post](https://docld.com/blog/docld-fintabnet)
- [Benchmark Code](https://github.com/Doc-LD/fintabnet-bench)
- [RD-TableBench Results](https://docld.com/blog/docld-tablebench)