---
license: mit
task_categories:
  - table-question-answering
  - image-to-text
tags:
  - table-extraction
  - benchmark
  - fintabnet
  - document-ai
  - docld
pretty_name: DocLD FinTabNet Benchmark
size_categories:
  - n<1K
---

# DocLD FinTabNet Benchmark Results

Benchmark results for DocLD table extraction on the FinTabNet dataset.

## Results Summary

| Metric | Value |
|---|---|
| Mean accuracy | 82.2% |
| Median | 82.3% |
| P25 / P75 | 72.2% / 96.8% |
| Min / Max | 41.4% / 100.0% |
| Scored samples | 451 |
| Total samples | 1000 |

## Methodology

- **Dataset:** `FinTabNet_OTSL` (1000 samples from the test split)
- **Extraction:** DocLD agentic table extraction (VLM-based, gpt-5-mini)
- **Scoring:** Needleman-Wunsch hierarchical alignment (same as RD-TableBench)
- **Output:** HTML tables with `rowspan`/`colspan` for merged cells
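To illustrate the style of scoring, here is a minimal sketch of Needleman-Wunsch alignment over flattened sequences of cell strings. This is not the RD-TableBench scorer itself (which aligns hierarchically over rows and their cells); the match/mismatch/gap weights below are chosen for illustration only.

```python
def needleman_wunsch_similarity(pred, gold, gap=-1.0):
    """Globally align two sequences of cell strings and return a 0-1 score.

    Illustrative weights: +1 for an exact cell match, -1 for a mismatch,
    -1 per gap (a cell present in one table but not the other).
    """
    n, m = len(pred), len(gold)
    # dp[i][j] = best alignment score of pred[:i] against gold[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 1.0 if pred[i - 1] == gold[j - 1] else -1.0
            dp[i][j] = max(
                dp[i - 1][j - 1] + match,  # align the two cells
                dp[i - 1][j] + gap,        # gap in the gold sequence
                dp[i][j - 1] + gap,        # gap in the prediction
            )
    # Normalize the raw score to [0, 1] by the longer sequence length.
    return max(dp[n][m], 0.0) / max(n, m, 1)
```

A perfectly extracted table scores 1.0; each wrong or missing cell pulls the normalized score down, which is what makes the per-sample accuracies above comparable across tables of different sizes.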

## Comparison

| Provider | FinTabNet Accuracy |
|---|---|
| DocLD | 82.2% |
| GTE (IBM) | ~78% |
| TATR (Microsoft) | ~65% |

## Files

- `results.json`: full benchmark results with per-sample scores
- `predictions/`: HTML predictions for each sample
- `charts/`: visualization PNGs
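As a sketch of how the summary statistics above could be recomputed from `results.json`: the schema assumed here (a top-level `samples` list whose entries carry a numeric `score`, with `null` for unscored samples) is an assumption for illustration, not the file's documented format.

```python
import json
import statistics


def summarize(path="results.json"):
    """Load per-sample scores and recompute the summary-table statistics.

    Assumes (hypothetically) that the file looks like:
      {"samples": [{"score": 0.82}, {"score": null}, ...]}
    where a null score marks an unscored sample.
    """
    with open(path) as f:
        data = json.load(f)
    scores = sorted(s["score"] for s in data["samples"]
                    if s.get("score") is not None)
    # quantiles(n=4) returns the three quartile cut points [P25, P50, P75].
    q = statistics.quantiles(scores, n=4)
    return {
        "mean": statistics.mean(scores),
        "median": statistics.median(scores),
        "p25": q[0],
        "p75": q[2],
        "min": scores[0],
        "max": scores[-1],
        "scored": len(scores),
    }
```

Note that only 451 of the 1000 samples contribute to the statistics; unscored samples are excluded rather than counted as zero.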

## Links