Improve dataset card: Add task category, update paper link, and add sample usage (#1)
Commit: 13e13196a38214e065c7780a7b1391909c842e7e
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
README.md
CHANGED
---
license: mit
task_categories:
- table-question-answering
configs:
- config_name: table
  data_files: sqa_table.jsonl
- config_name: test_query
  data_files: sqa_query.jsonl
---

📄 [Paper](https://huggingface.co/papers/2504.01346) | 👨🏻‍💻 [Code](https://github.com/jiaruzouu/T-RAG)

For MultiTableQA, we release a comprehensive benchmark, including five different datasets covering table fact-checking, single-hop QA, and multi-hop QA:

| Dataset | Link |

MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.

## Sample Usage

This dataset (`MultiTableQA-SQA`) is part of the larger **MultiTableQA** benchmark. To prepare the full benchmark, you can follow these steps from the official T-RAG GitHub repository.

First, clone the repository and set up the environment:

```bash
git clone https://github.com/jiaruzouu/T-RAG.git
cd T-RAG

conda create -n trag python=3.11.9
conda activate trag

# Install dependencies
pip install -r requirements.txt
```

Then, navigate to the `table2graph` directory and run the data preparation script:

```bash
cd table2graph
bash scripts/prepare_data.sh
```

This script will automatically fetch the source tables, apply decomposition (row/column splitting), and generate the benchmark splits, including the `MultiTableQA-SQA` data available in this repository.
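The row/column decomposition mentioned here splits each source table into smaller sub-tables. The actual logic lives in the repository's `table2graph` scripts; the following is only an illustrative sketch of the idea, with all function names and data invented for the example:

```python
from typing import Dict, List

def split_rows(table: List[Dict], chunk_size: int) -> List[List[Dict]]:
    """Split a table (a list of row dicts) into row-wise sub-tables."""
    return [table[i:i + chunk_size] for i in range(0, len(table), chunk_size)]

def split_columns(table: List[Dict], groups: List[List[str]]) -> List[List[Dict]]:
    """Project a table onto column groups, producing column-wise sub-tables."""
    return [[{c: row[c] for c in cols} for row in table] for cols in groups]

# Toy table for illustration
rows = [{"id": 1, "city": "Paris", "pop": 2.1},
        {"id": 2, "city": "Rome", "pop": 2.8},
        {"id": 3, "city": "Oslo", "pop": 0.7}]

print(split_rows(rows, 2))                    # two row chunks
print(split_columns(rows, [["id", "city"]]))  # one column projection
```

Splitting tables this way is what turns single-table QA examples into a multi-table retrieval problem: the evidence for one question may now span several sub-tables.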

---
# Citation

If you find our work useful, please cite:

```bibtex
@misc{zou2025rag,
      title={RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking},
      author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
      year={2025},
      eprint={2504.01346},