---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 11563543600
    num_examples: 16450
  - name: test
    num_bytes: 1638512366
    num_examples: 3159
  download_size: 15933142593
  dataset_size: 13202055966
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# TabComp
**A Benchmark for OCR-Free Visual Table Reading Comprehension**

This dataset accompanies the paper [TabComp: A Dataset for Visual Table Reading Comprehension](https://aclanthology.org/2025.findings-naacl.320.pdf) (Findings of NAACL 2025).

TabComp evaluates **Vision-Language Models (VLMs)** on their ability to **read, understand, and reason over table images** without relying on OCR, using **generative question answering**.

## Why TabComp?

Modern VLMs perform well on general VQA but struggle with **tables**, which require:
- Structured reasoning across rows and columns
- Joint understanding of layout and text
- Multi-step inference over semi-structured data

TabComp isolates this challenge and provides a **focused benchmark for table understanding**.

## Dataset Overview

- **Images:** 3,318 table images
- **QA pairs:** 19,610
- **Answer type:** Generative (natural language)
- **Domain:** Industrial documents
- **Text types:** Printed + handwritten

### Task Definition

Given:
- A **table image**
- A **question**

Generate:
- A **natural language answer** requiring table comprehension
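
For concreteness, here is a minimal inference sketch for this task. It uses the publicly available DocVQA-fine-tuned Donut checkpoint (`naver-clova-ix/donut-base-finetuned-docvqa`) as a stand-in model, with the standard Donut task-prompt format; this is an illustrative assumption, not the exact setup evaluated in the paper.

```python
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Stand-in checkpoint: the public DocVQA-fine-tuned Donut model,
# not the exact weights evaluated in the paper.
ckpt = "naver-clova-ix/donut-base-finetuned-docvqa"
processor = DonutProcessor.from_pretrained(ckpt)
model = VisionEncoderDecoderModel.from_pretrained(ckpt)

image = Image.open("table.png").convert("RGB")  # a TabComp table image
question = "What is the value in the Total column for row 2?"

# Donut receives the question inside a task prompt and generates the answer.
prompt = f"<s_docvqa><s_question>{question}</s_question><s_answer>"
pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(
    prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=512,
)
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "")
sequence = sequence.replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task start token
print(processor.token2json(sequence))  # {'question': ..., 'answer': ...}
```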

## What Makes It Challenging?

- No OCR signals
- Dense textual + structural information
- Long-range dependencies across table cells
- Generative answers (not extractive spans)

## Data Format
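
Each example contains the fields declared in the `dataset_info` metadata above:

- `id` (string): unique example identifier
- `image` (image): the table image
- `question` (string): a natural-language question about the table
- `answer` (string): the reference answer

Splits: `train` (16,450 examples) and `test` (3,159 examples). Below is a minimal loading sketch with the `datasets` library; the repository id is a placeholder for this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the actual Hub path of this dataset.
ds = load_dataset("ORG/TabComp")

example = ds["train"][0]
print(example["id"])        # unique identifier
print(example["question"])  # question about the table image
print(example["answer"])    # reference generative answer
example["image"].save("table.png")  # decoded as a PIL image
```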

## Leaderboard (Baseline Results)

Performance on TabComp (generative metrics):
| Model       | Setting    | BLEU-4 ↑  | ROUGE-L ↑ | BERTScore ↑ | METEOR ↑  |
| ----------- | ---------- | --------- | --------- | ----------- | --------- |
| Donut-base  | Fine-tuned | **42.69** | 37.29     | 83.38       | **60.14** |
| Donut-base  | End-to-end | 28.59     | 32.24     | 85.06       | 47.19     |
| Donut-proto | Fine-tuned | 6.49      | 17.84     | 73.26       | 19.80     |
| Donut-proto | End-to-end | 34.87     | 37.02     | 87.74       | 56.49     |
| UReader     | Zero-shot  | 28.14     | **37.64** | **88.04**   | 20.71     |

Full metrics (BLEU-1/2/3/4, CIDEr) are available in the paper.
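
The scores above can be reproduced with standard generative metrics. Here is a minimal sketch using the Hugging Face `evaluate` library; this is an illustrative setup, not necessarily the exact evaluation scripts used in the paper:

```python
import evaluate

# Toy predictions/references; in practice use model outputs and TabComp answers.
predictions = ["The total quantity shipped is 42."]
references = ["The total quantity shipped is 42."]

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")
bertscore = evaluate.load("bertscore")

print(bleu.compute(predictions=predictions,
                   references=[[r] for r in references], max_order=4)["bleu"])
print(rouge.compute(predictions=predictions, references=references)["rougeL"])
print(meteor.compute(predictions=predictions, references=references)["meteor"])
print(bertscore.compute(predictions=predictions, references=references,
                        lang="en")["f1"][0])
```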

## Contributions

We welcome:
- New model evaluations
- Error analysis
- Extensions to multilingual / multi-table settings

## Contact

For collaboration, email **Somraj Gautam** at gautam.8@iitj.ac.in.