# TableEval dataset

[](https://github.com/esborisova/TableEval-Study)
[](https://openreview.net/pdf?id=umbMEwiTtq)

**TableEval** is designed to benchmark and compare the performance of (M)LLMs on tables from scientific vs. non-scientific sources, represented as images vs. text.
It comprises six data subsets derived from the test sets of existing benchmarks for question answering (QA) and table-to-text (T2T) tasks, containing a total of **3017 tables** and **11312 instances**.
The scientific subset includes tables from pre-prints and peer-reviewed scholarly publications, while the non-scientific subset contains tables from Wikipedia and financial reports.