Improve dataset card for MultiTableQA-TabFact: Add metadata, update links, clarify description, add sample usage, fix citation

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +32 -9
README.md CHANGED
@@ -1,15 +1,28 @@
 ---
+license: mit
+task_categories:
+- table-question-answering
+language:
+- en
+tags:
+- rag
+- tables
+- multi-table
+- question-answering
+- fact-checking
+- llm
 configs:
 - config_name: table
-  data_files: "tabfact_table.jsonl"
+  data_files: tabfact_table.jsonl
 - config_name: test_query
-  data_files: "tabfact_query.jsonl"
-
-license: mit
+  data_files: tabfact_query.jsonl
 ---
-πŸ“„ [Paper](https://arxiv.org/abs/2504.01346) | πŸ‘¨πŸ»β€πŸ’» [Code](https://github.com/jiaruzouu/T-RAG)
 
-For MultiTableQA, we release a comprehensive benchmark, including five different datasets covering table fact-checking, single-hop QA, and multi-hop QA:
+πŸ“„ [RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking](https://arxiv.org/abs/2504.01346) | πŸ‘¨πŸ»β€πŸ’» [Code](https://github.com/jiaruzouu/T-RAG) | πŸ€— [MultiTableQA Hub Collection](https://huggingface.co/collections/jiaruz2/multitableqa-68dc8d850ea7e168f47cecd8)
+
+This repository contains **MultiTableQA-TabFact**, one of the five datasets released as part of the comprehensive **MultiTableQA** benchmark. MultiTableQA-TabFact focuses on **table fact-checking** tasks. The overall MultiTableQA benchmark extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
+
+Other datasets in the MultiTableQA benchmark include:
 | Dataset | Link |
 |-----------------------|------|
 | MultiTableQA-TATQA | πŸ€— [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_TATQA) |
@@ -18,8 +31,18 @@ For MultiTableQA, we release a comprehensive benchmark, including five different
 | MultiTableQA-WTQ | πŸ€— [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_WTQ) |
 | MultiTableQA-HybridQA | πŸ€— [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_HybridQA)|
 
+---
+
+### Sample Usage
+
+To download and preprocess the **MultiTableQA** benchmark, navigate to the `table2graph` directory in the code repository and run the `prepare_data.sh` script:
+
+```bash
+cd table2graph
+bash scripts/prepare_data.sh
+```
 
-MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
+This script will automatically fetch the source tables, apply decomposition (row/column splitting), and generate the benchmark splits.
 
 ---
 # Citation
@@ -27,8 +50,8 @@ MultiTableQA extends the traditional single-table QA setting into a multi-table
 If you find our work useful, please cite:
 
 ```bibtex
-@misc{zou2025gtrgraphtableragcrosstablequestion,
-  title={GTR: Graph-Table-RAG for Cross-Table Question Answering},
+@misc{zou2025rag,
+  title={RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking},
   author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
   year={2025},
   eprint={2504.01346},
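
For a quick local sanity check, the two JSONL files named in the card's `configs` section (`tabfact_table.jsonl`, `tabfact_query.jsonl`) can be read with the standard library alone. The sketch below is illustrative only: the field names (`table_id`, `statement`, `label`) are assumptions, not the dataset's documented schema, and the files written here are miniature stand-ins.

```python
import json
from pathlib import Path

# Miniature stand-ins for the two files declared under `configs`;
# the field names below are assumed, not taken from the dataset card.
Path("tabfact_table.jsonl").write_text(
    '{"table_id": "t1", "table": [["Team", "Wins"], ["A", "3"]]}\n'
)
Path("tabfact_query.jsonl").write_text(
    '{"query_id": "q1", "table_id": "t1", '
    '"statement": "Team A has 3 wins.", "label": 1}\n'
)

def read_jsonl(path):
    """Load a JSON-Lines file: one JSON object per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Index tables by id, then join each fact-checking query to its table.
tables = {row["table_id"]: row for row in read_jsonl("tabfact_table.jsonl")}
queries = read_jsonl("tabfact_query.jsonl")
for q in queries:
    source_table = tables[q["table_id"]]
    print(q["statement"], "->", source_table["table"][0])
```

On the Hub, the two configs would presumably load via `datasets.load_dataset("jiaruz2/MultiTableQA_TabFact", "table")` and `"test_query"`; that repository id is inferred from the sibling dataset links in the card and should be verified.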