# TableEval dataset
[GitHub repository](https://github.com/esborisova/TableEval-Study)
[arXiv pre-print](https://arxiv.org/abs/2507.00152)
**TableEval** is designed to benchmark and compare the performance of (M)LLMs on tables from scientific vs. non-scientific sources, represented as images vs. text.
It comprises six data subsets derived from the test sets of existing benchmarks for question answering (QA) and table-to-text (T2T) tasks, containing a total of **3017 tables** and **11312 instances**.
The scientific subset includes tables from pre-prints and peer-reviewed scholarly publications, while the non-scientific subset contains tables from Wikipedia and financial reports.
Each table is available as a **PNG** image and in four textual formats: **HTML**, **XML**, **LaTeX**, and **Dictionary (Dict)**.
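To illustrate how the textual formats relate to one another, the following is a minimal sketch that parses a toy HTML table into a Dict-style view using only the Python standard library. The `header`/`rows` schema shown here is an assumption for illustration, not the dataset's actual Dict layout.

```python
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collect cell text from a simple <table> into a list of rows."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = []

    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th") and self._cell is not None:
            self._row.append("".join(self._cell).strip())
            self._cell = None

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)

# Toy example table (not taken from the dataset).
html_table = (
    "<table>"
    "<tr><th>Model</th><th>Acc.</th></tr>"
    "<tr><td>LLM-A</td><td>0.71</td></tr>"
    "</table>"
)
parser = TableParser()
parser.feed(html_table)
# Assumed Dict-style schema: header row plus data rows.
table_dict = {"header": parser.rows[0], "rows": parser.rows[1:]}
print(table_dict)
```

This keeps the cell contents identical across the two representations; only the markup around them changes, which is the property the format comparison in the benchmark relies on.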
All task annotations are taken from the source datasets. Please refer to the paper [Table Understanding and (Multimodal) LLMs: A Cross-Domain Case Study on Scientific vs. Non-Scientific Data](https://arxiv.org/abs/2507.00152) for more details.
## Overview and statistics