Modalities: Image
Languages: English
katebor committed (verified) · commit 234feaf · parent 17da067

add link to pre-print

Files changed (1): README.md (+2 −2)
```diff
@@ -29,13 +29,13 @@ size_categories:
 # TableEval dataset
 
 [![GitHub](https://img.shields.io/badge/GitHub-000000?style=flat&logo=github&logoColor=white)](https://github.com/esborisova/TableEval-Study)
-[![PDF](https://img.shields.io/badge/PDF-red)](https://openreview.net/pdf?id=umbMEwiTtq)
+[![arXiv](https://img.shields.io/badge/arXiv-darkred)](https://arxiv.org/abs/2507.00152)
 
 **TableEval** is developed to benchmark and compare the performance of (M)LLMs on tables from scientific vs. non-scientific sources, represented as images vs. text.
 It comprises six data subsets derived from the test sets of existing benchmarks for question answering (QA) and table-to-text (T2T) tasks, containing a total of **3017 tables** and **11312 instances**.
 The scientific subset includes tables from pre-prints and peer-reviewed scholarly publications, while the non-scientific subset covers tables from Wikipedia and financial reports.
 Each table is available as a **PNG** image and in four textual formats: **HTML**, **XML**, **LaTeX**, and **Dictionary (Dict)**.
-All task annotations are taken from the source datasets. Please refer to the paper [Table Understanding and (Multimodal) LLMs: A Cross-Domain Case Study on Scientific vs. Non-Scientific Data](https://openreview.net/pdf?id=umbMEwiTtq) for more details.
+All task annotations are taken from the source datasets. Please refer to the paper [Table Understanding and (Multimodal) LLMs: A Cross-Domain Case Study on Scientific vs. Non-Scientific Data](https://arxiv.org/abs/2507.00152) for more details.
 
 
 ## Overview and statistics
```