# Table + Text IR Evaluation

An evaluation suite of 8 datasets for benchmarking retrieval models on Table+Text retrieval.
| qid | did | score |
|---|---|---|
| q-0 | United-2018_27.md | 1 |
| q-0 | United-2018_41.md | 1 |
| q-0 | United-2017_32.md | 1 |
| q-0 | United-2017_29.md | 1 |
| q-0 | United-2017_48.md | 1 |
| q-0 | United-2017_30.md | 1 |
| q-0 | United-2017_54.md | 1 |
| q-0 | United-2018_22.md | 1 |
| q-0 | United-2017_4.md | 1 |
| q-0 | United-2018_47.md | 1 |
| q-0 | United-2018_4.md | 1 |
| q-1 | United-2019_29.md | 1 |
| q-1 | United-2018_22.md | 1 |
| q-1 | United-2018_25.md | 1 |
| q-1 | United-2019_4.md | 1 |
| q-1 | United-2019_26.md | 1 |
| q-1 | United-2018_4.md | 1 |
| q-2 | United-2018_23.md | 1 |
| q-2 | United-2019_29.md | 1 |
| q-2 | United-2018_22.md | 1 |
| q-2 | United-2018_25.md | 1 |
| q-2 | United-2019_4.md | 1 |
| q-2 | United-2019_26.md | 1 |
| q-2 | United-2018_4.md | 1 |
| q-3 | United-2019_4.md | 1 |
| q-3 | United-2018_4.md | 1 |
| q-4 | United-2018_27.md | 1 |
| q-4 | United-2018_41.md | 1 |
| q-4 | United-2017_32.md | 1 |
| q-4 | United-2017_29.md | 1 |
| q-4 | United-2017_48.md | 1 |
| q-4 | United-2017_30.md | 1 |
| q-4 | United-2017_54.md | 1 |
| q-4 | United-2017_4.md | 1 |
| q-4 | United-2019_26.md | 1 |
| q-4 | United-2018_4.md | 1 |
| q-5 | United-2018_27.md | 1 |
| q-5 | United-2017_32.md | 1 |
| q-5 | United-2017_29.md | 1 |
| q-5 | United-2017_48.md | 1 |
| q-5 | United-2017_30.md | 1 |
| q-5 | United-2017_54.md | 1 |
| q-5 | United-2017_4.md | 1 |
| q-5 | United-2018_22.md | 1 |
| q-5 | United-2018_47.md | 1 |
| q-5 | United-2019_26.md | 1 |
| q-5 | United-2018_4.md | 1 |
| q-6 | United-2018_27.md | 1 |
| q-6 | United-2018_41.md | 1 |
| q-6 | United-2017_32.md | 1 |
| q-6 | United-2017_29.md | 1 |
| q-6 | United-2017_48.md | 1 |
| q-6 | United-2017_54.md | 1 |
| q-6 | United-2017_4.md | 1 |
| q-6 | United-2018_47.md | 1 |
| q-6 | United-2018_4.md | 1 |
| q-7 | United-2018_22.md | 1 |
| q-8 | United-2018_22.md | 1 |
| q-9 | United-2019_26.md | 1 |
| q-9 | United-2018_22.md | 1 |
| q-10 | United-2018_22.md | 1 |
| q-11 | United-2019_26.md | 1 |
| q-11 | United-2018_22.md | 1 |
| q-12 | United-2019_26.md | 1 |
| q-12 | United-2018_22.md | 1 |
| q-13 | United-2019_26.md | 1 |
| q-13 | United-2018_22.md | 1 |
| q-14 | United-2019_26.md | 1 |
| q-14 | United-2018_22.md | 1 |
| q-15 | United-2018_22.md | 1 |
| q-16 | United-2018_23.md | 1 |
| q-16 | United-2018_22.md | 1 |
| q-16 | United-2018_24.md | 1 |
| q-17 | United-2018_23.md | 1 |
| q-17 | United-2018_24.md | 1 |
| q-18 | United-2018_27.md | 1 |
| q-19 | United-2018_27.md | 1 |
| q-20 | United-2018_27.md | 1 |
| q-21 | United-2018_27.md | 1 |
| q-21 | United-2017_29.md | 1 |
| q-22 | United-2018_27.md | 1 |
| q-22 | United-2017_29.md | 1 |
| q-23 | United-2018_27.md | 1 |
| q-24 | United-2018_41.md | 1 |
| q-24 | United-2018_24.md | 1 |
| q-24 | United-2019_48.md | 1 |
| q-24 | United-2019_42.md | 1 |
| q-24 | United-2018_26.md | 1 |
| q-24 | United-2018_47.md | 1 |
| q-25 | United-2018_4.md | 1 |
| q-25 | United-2018_41.md | 1 |
| q-25 | United-2019_29.md | 1 |
| q-25 | United-2018_47.md | 1 |
| q-25 | United-2018_25.md | 1 |
| q-25 | United-2018_9.md | 1 |
| q-25 | United-2019_42.md | 1 |
| q-25 | United-2019_48.md | 1 |
| q-25 | United-2019_4.md | 1 |
| q-25 | United-2019_28.md | 1 |
| q-26 | United-2018_47.md | 1 |
This dataset is part of a Table + Text retrieval benchmark. It includes queries and relevance judgments (qrels) for the test split, together with a plain-text corpus.
| Config | Description | Split(s) |
|---|---|---|
| default | Relevance judgments (qrels): qid, did, score | test |
| queries | Query IDs and text | test_queries |
| corpus | Plain-text corpus: _id, title, text | corpus |
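Each config can be loaded with the Hugging Face `datasets` library. A minimal sketch, with config and split names taken from the table above; the repo id below is a placeholder, so substitute the actual dataset id from this collection:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute a real dataset id from this collection.
REPO_ID = "ibm-research/your-dataset-name"

qrels = load_dataset(REPO_ID, "default", split="test")            # columns: qid, did, score
queries = load_dataset(REPO_ID, "queries", split="test_queries")  # query ids and text
corpus = load_dataset(REPO_ID, "corpus", split="corpus")          # columns: _id, title, text

print(qrels[0])  # e.g. {'qid': 'q-0', 'did': 'United-2018_27.md', 'score': 1}
```

The table below summarizes the datasets included in the collection.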
| Dataset | Structured | #Train | #Dev | #Test | #Corpus |
|---|---|---|---|---|---|
| OpenWikiTables | ✓ | 53.8k | 6.6k | 6.6k | 24.7k |
| NQTables | ✓ | 9.6k | 1.1k | 1k | 170k |
| FeTaQA | ✓ | 7.3k | 1k | 2k | 10.3k |
| OTT-QA (small) | ✓ | 41.5k | 2.2k | -- | 8.8k |
| MultiHierTT | ✗ | -- | 929 | -- | 9.9k |
| AIT-QA | ✗ | -- | -- | 515 | 1.9k |
| StatcanRetrieval | ✗ | -- | -- | 870 | 5.9k |
| watsonxDocsQA | ✗ | -- | -- | 30 | 1.1k |
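Since every dataset in the collection exposes the same qrels schema, a retrieval run can be scored the same way across all of them. One common option is pytrec_eval; in the sketch below, the qrels slice is copied from the sample rows above, while the run and its scores are made up for illustration:

```python
import pytrec_eval

# Qrels in pytrec_eval's nested {qid: {did: relevance}} format,
# taken from the sample rows shown earlier in this card.
qrels_dict = {
    "q-3": {"United-2019_4.md": 1, "United-2018_4.md": 1},
    "q-7": {"United-2018_22.md": 1},
}

# A retrieval run in {qid: {did: retrieval_score}} format; scores are illustrative.
run = {
    "q-3": {"United-2019_4.md": 12.3, "United-2017_32.md": 9.0, "United-2018_4.md": 7.5},
    "q-7": {"United-2018_23.md": 4.2, "United-2018_22.md": 3.9},
}

evaluator = pytrec_eval.RelevanceEvaluator(qrels_dict, {"map", "ndcg", "recip_rank"})
per_query = evaluator.evaluate(run)

# Macro-average each measure over the evaluated queries.
for measure in ("map", "ndcg", "recip_rank"):
    mean = sum(scores[measure] for scores in per_query.values()) / len(per_query)
    print(f"{measure}: {mean:.4f}")
```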
If you use TableIR Eval: Table-Text IR Evaluation Collection, please cite:
@misc{doshi2026tableir,
  title = {TableIR Eval: Table-Text IR Evaluation Collection},
  author = {Doshi, Meet and Boni, Odellia and Kumar, Vishwajeet and Sen, Jaydeep and Joshi, Sachindra},
  year = {2026},
  institution = {IBM Research},
  howpublished = {https://huggingface.co/collections/ibm-research/table-text-ir-evaluation},
  note = {Hugging Face dataset collection}
}
All credit goes to the original authors of the underlying datasets. Please cite their work:
@misc{katsis2021aitqa,
  title = {AIT-QA: Question Answering Dataset over Complex Tables in the Airline Industry},
  author = {Yannis Katsis and Saneem Chemmengath and Vishwajeet Kumar and Samarth Bharadwaj and Mustafa Canim and Michael Glass and Alfio Gliozzo and Feifei Pan and Jaydeep Sen and Karthik Sankaranarayanan and Soumen Chakrabarti},
  year = {2021},
  eprint = {2106.12944},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}