Table + Text IR Evaluation (Collection)
An evaluation suite of 8 datasets for benchmarking retrieval models on Table + Text retrieval tasks.
| qid (string) | did (string) | score (int32) |
|---|---|---|
| test_1 | 5B37710FE7BBD6EFB842FEB7B49B036302E18F81 | 1 |
| test_2 | 42AE491240EF740E6A8C5CF32B817E606F554E49 | 1 |
| test_3 | 51747F17F413F1F34CFD73D170DE392D874D03DD | 1 |
| test_4 | B193A2795BDEF17A5D204CDD18188A767E2FE7B7 | 1 |
| test_5 | F003581774D3028EF53E61A002C20A6D36BA8E00 | 1 |
| test_6 | 6049D5AA5DE41309E6281534A464ABD6898A758C | 1 |
| test_7 | 78A8C07B83DF1B01276353D098E84F12304636E2 | 1 |
| test_8 | 42AE491240EF740E6A8C5CF32B817E606F554E49 | 1 |
| test_9 | 38FB0908B90954D96CEFF54BA975DE832286A0A7 | 1 |
| test_10 | B2117B2CD0FEA469149B23FACB6A9F7F32905AFD | 1 |
| test_11 | F003581774D3028EF53E61A002C20A6D36BA8E00 | 1 |
| test_12 | 717B697E0045B5D7DFF6ACC93AD5DEC98E27EBDC | 1 |
| test_13 | F003581774D3028EF53E61A002C20A6D36BA8E00 | 1 |
| test_14 | 43785386700CF73E37A8F76ADC4EF9FB01EE0AEB | 1 |
| test_15 | 4B48EF3D089F3142B1ED604A32873217F89E052F | 1 |
| test_16 | A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8 | 1 |
| test_17 | 315971AE6C6A4EEDE13E9E1449B2A36F548B928F | 1 |
| test_18 | F003581774D3028EF53E61A002C20A6D36BA8E00 | 1 |
| test_19 | 3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D | 1 |
| test_20 | F5AF4BCC2D0168D2698BEB2A858C24F81A476610 | 1 |
| test_21 | BE7F45C3E17998A50B8414D623007ED668B37C04 | 1 |
| test_22 | 82546B72EDBFB76F571CFD06A7009E01615FA054 | 1 |
| test_23 | F003581774D3028EF53E61A002C20A6D36BA8E00 | 1 |
| test_24 | 977C81385F7825613F1EDBD3C0DBF44C259BA8D7 | 1 |
| test_25 | B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08 | 1 |
| test_26 | FE2254205E6DD1EE2A4EC62036AB86BC5E084F5D | 1 |
| test_27 | 15D57C8193B99B8525BC2999EF82EF1CD7EAE8AD | 1 |
| test_28 | ABFAAF84948B090C8EA099FF44CC8CD878371073 | 1 |
| test_29 | F003581774D3028EF53E61A002C20A6D36BA8E00 | 1 |
| test_30 | 7946DCF2F69A7420490A7B5CA677C2273DE5764B | 1 |
This dataset is part of a Table + Text retrieval benchmark. It includes queries and relevance judgments for the test split, together with the document corpus (available in a single format: corpus).
| Config | Description | Split(s) |
|---|---|---|
| default | Relevance judgments (qrels): qid, did, score | test |
| queries | Query IDs and text | test_queries |
| corpus | Plain text corpus: _id, title, text | corpus |
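As a minimal sketch of how these configs map onto the Hugging Face `datasets` API, the snippet below loads the qrels, queries, and corpus. The repository ID is a placeholder; substitute the actual dataset ID from this collection.

```python
from datasets import load_dataset

# Hypothetical repository ID -- substitute the real dataset ID from the collection page.
REPO_ID = "ibm-research/table-text-ir-dataset"

# "default" config: relevance judgments (qid, did, score), test split.
qrels = load_dataset(REPO_ID, "default", split="test")

# "queries" config: query IDs and text, test_queries split.
queries = load_dataset(REPO_ID, "queries", split="test_queries")

# "corpus" config: plain text documents (_id, title, text), corpus split.
corpus = load_dataset(REPO_ID, "corpus", split="corpus")

print(qrels[0])  # e.g. {'qid': 'test_1', 'did': '5B37...', 'score': 1}
```

Config and split names follow the table above; only the repository ID is assumed.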
| Dataset | Structured | #Train | #Dev | #Test | #Corpus |
|---|---|---|---|---|---|
| OpenWikiTables | ✓ | 53.8k | 6.6k | 6.6k | 24.7k |
| NQTables | ✓ | 9.6k | 1.1k | 1k | 170k |
| FeTaQA | ✓ | 7.3k | 1k | 2k | 10.3k |
| OTT-QA (small) | ✓ | 41.5k | 2.2k | -- | 8.8k |
| MultiHierTT | ✗ | -- | 929 | -- | 9.9k |
| AIT-QA | ✗ | -- | -- | 515 | 1.9k |
| StatcanRetrieval | ✗ | -- | -- | 870 | 5.9k |
| watsonxDocsQA | ✗ | -- | -- | 30 | 1.1k |
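For reference, here is a minimal, dependency-free sketch of how qrels in the (qid, did, score) format shown above can score a retrieval run. The `run` structure (query ID mapped to a best-first ranked list of document IDs) and all function names are illustrative assumptions, not part of the dataset.

```python
from collections import defaultdict

def evaluate_run(qrels_rows, run, k=10):
    """Compute recall@k and MRR@k from (qid, did, score) relevance rows.

    qrels_rows: iterable of {'qid', 'did', 'score'} dicts, as in the default config.
    run: dict mapping qid -> list of document IDs ranked best-first (assumed format).
    """
    relevant = defaultdict(set)  # qid -> set of relevant document IDs
    for row in qrels_rows:
        if row["score"] > 0:
            relevant[row["qid"]].add(row["did"])

    recall, rr = [], []
    for qid, rel_dids in relevant.items():
        ranked = run.get(qid, [])[:k]
        recall.append(len(set(ranked) & rel_dids) / len(rel_dids))
        first_hit = next((i for i, did in enumerate(ranked) if did in rel_dids), None)
        rr.append(0.0 if first_hit is None else 1.0 / (first_hit + 1))

    n = len(relevant)
    return {f"recall@{k}": sum(recall) / n, f"mrr@{k}": sum(rr) / n}

# Toy usage with the first preview row from the table above:
qrels_rows = [{"qid": "test_1", "did": "5B37710FE7BBD6EFB842FEB7B49B036302E18F81", "score": 1}]
run = {"test_1": ["5B37710FE7BBD6EFB842FEB7B49B036302E18F81", "SOME_OTHER_DOC_ID"]}
print(evaluate_run(qrels_rows, run))  # {'recall@10': 1.0, 'mrr@10': 1.0}
```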
If you use the TableIR Eval: Table-Text IR Evaluation Collection, please cite:
@misc{doshi2026tableir,
  title        = {TableIR Eval: Table-Text IR Evaluation Collection},
  author       = {Doshi, Meet and Boni, Odellia and Kumar, Vishwajeet and Sen, Jaydeep and Joshi, Sachindra},
  year         = {2026},
  institution  = {IBM Research},
  howpublished = {https://huggingface.co/collections/ibm-research/table-text-ir-evaluation},
  note         = {Hugging Face dataset collection}
}
This repository is a reformatted version of ibm-research/watsonxDocsQA.
All credit goes to the original authors; please cite their work:
@misc{orbach2025analysishyperparameteroptimizationmethods,
  title         = {An Analysis of Hyper-Parameter Optimization Methods for Retrieval Augmented Generation},
  author        = {Matan Orbach and Ohad Eytan and Benjamin Sznajder and Ariel Gera and Odellia Boni and Yoav Kantor and Gal Bloch and Omri Levy and Hadas Abraham and Nitzan Barzilay and Eyal Shnarch and Michael E. Factor and Shila Ofek-Koifman and Paula Ta-Shma and Assaf Toledo},
  year          = {2025},
  eprint        = {2505.03452},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2505.03452}
}