---
license: mit
viewer: true
task_categories:
- table-question-answering
- table-to-text
language:
- en
pretty_name: TableEval
configs:
- config_name: default
  data_files:
  - split: comtqa_fin
    path: ComTQA/FinTabNet/comtqa_fintabnet.json
  - split: comtqa_pmc
    path: ComTQA/PubTab1M/comtqa_pubtab1m.json
  - split: logic2text
    path: ComTQA/Logic2Text/logic2text.json
  - split: logicnlg
    path: ComTQA/LogicNLG/logicnlg.json
  - split: scigen
    path: ComTQA/SciGen/scigen.json
  - split: numericnlg
    path: ComTQA/numericNLG/numericnlg.json
---

# TableEval dataset
TableEval is a benchmark for comparing the performance of (M)LLMs on tables from scientific vs. non-scientific sources, represented as images vs. text. It comprises six data subsets derived from existing benchmarks for question answering (QA) and table-to-text (T2T) tasks, containing a total of 3017 tables and 11312 instances. The scientific subsets include tables from pre-prints and peer-reviewed scholarly publications, while the non-scientific subsets contain tables from Wikipedia and financial reports. Each table is available as a PNG image and in four textual formats: HTML, XML, LaTeX, and Dictionary (Dict). All task annotations are taken from the source datasets.
## Overview and statistics

The symbol ⬇️ indicates formats already available in the given corpus, while 📄 and ⚙️ denote formats extracted from the table source files (e.g., article PDF, Wikipedia page) and generated from other formats in this study, respectively.

### Number of tables per format and data subset
| Dataset | Image | Dict | LaTeX | HTML | XML |
|---|---|---|---|---|---|
| ComTQA (PubTables-1M) | 932 | 932 | 932 | 932 | 932 |
| numericNLG | 135 | 135 | 135 | 135 | 135 |
| SciGen | 1035 | 1035 | 928 | 985 | 961 |
| ComTQA (FinTabNet) | 659 | 659 | 659 | 659 | 659 |
| LogicNLG | 184 | 184 | 184 | 184 | 184 |
| Logic2Text | 72 | 72 | 72 | 72 | 72 |
| Total | 3017 | 3017 | 2910 | 2967 | 2943 |
### Total number of instances per format and data subset
| Dataset | Image | Dict | LaTeX | HTML | XML |
|---|---|---|---|---|---|
| ComTQA (PubTables-1M) | 6232 | 6232 | 6232 | 6232 | 6232 |
| numericNLG | 135 | 135 | 135 | 135 | 135 |
| SciGen | 1035 | 1035 | 928 | 985 | 961 |
| ComTQA (FinTabNet) | 2838 | 2838 | 2838 | 2838 | 2838 |
| LogicNLG | 917 | 917 | 917 | 917 | 917 |
| Logic2Text | 155 | 155 | 155 | 155 | 155 |
| Total | 11312 | 11312 | 11205 | 11262 | 11238 |
## Structure

```
├── ComTQA
│   ├── FinTabNet
│   │   ├── comtqa_fintabnet.json
│   │   └── comtqa_fintabnet_imgs.zip
│   ├── PubTab1M
│   │   ├── comtqa_pubtab1m.json
│   │   └── comtqa_pubtab1m_imgs.zip
│   ├── Logic2Text
│   │   ├── logic2text.json
│   │   └── logic2text_imgs.zip
│   ├── LogicNLG
│   │   ├── logicnlg.json
│   │   └── logicnlg_imgs.zip
│   ├── SciGen
│   │   ├── scigen.json
│   │   └── scigen_imgs.zip
│   └── numericNLG
│       ├── numericnlg.json
│       └── numericnlg_imgs.zip
```
For more details on each subset, please refer to the respective README.md files: FinTabNet, PubTab1M, Logic2Text, LogicNLG, SciGen, and numericNLG.
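Each subset pairs a JSON annotation file with a zip archive of the corresponding table images. As a minimal loading sketch (assuming a local copy of the repository; the helper name `load_subset` and the example paths are illustrative, not part of any official tooling shipped with the dataset):

```python
import json
import zipfile


def load_subset(json_path, imgs_zip_path):
    """Load one TableEval subset: parse the annotation JSON and
    list the PNG table images bundled in the accompanying zip."""
    with open(json_path, encoding="utf-8") as f:
        instances = json.load(f)
    with zipfile.ZipFile(imgs_zip_path) as zf:
        image_names = [n for n in zf.namelist() if n.endswith(".png")]
    return instances, image_names


# Example call (paths assume the repository has been cloned locally):
# instances, images = load_subset(
#     "ComTQA/SciGen/scigen.json",
#     "ComTQA/SciGen/scigen_imgs.zip",
# )
```

The images stay inside the zip archives until needed; `zipfile.ZipFile.open` can then read an individual PNG without unpacking the whole archive.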
## Citation
TBA
## Funding
This work has received funding through the DFG project NFDI4DS (no. 460234259).