---
annotations_creators:
- derived
language:
- eng
license: cc-by-4.0
multilinguality: monolingual
task_categories:
- text-retrieval
task_ids:
- document-retrieval
tags:
- table-retrieval
- text
pretty_name: MultiHierTT
config_names:
- default
- queries
- corpus
dataset_info:
- config_name: default
  features:
  - name: qid
    dtype: string
  - name: did
    dtype: string
  - name: score
    dtype: int32
  splits:
  - name: dev
    num_bytes: 45922
    num_examples: 1068
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: dev_queries
    num_bytes: 212136
    num_examples: 929
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_bytes: 33851099
    num_examples: 9902
configs:
- config_name: default
  data_files:
  - split: dev
    path: dev_qrels.jsonl
- config_name: queries
  data_files:
  - split: dev_queries
    path: dev_queries.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
---
# MultiHierTT Retrieval

This dataset is part of a Table + Text retrieval benchmark. It provides queries and relevance judgments for the `dev` split, together with a single plain-text corpus file (`corpus.jsonl`).
## Configs

| Config | Description | Split(s) |
|---|---|---|
| `default` | Relevance judgments (qrels): `qid`, `did`, `score` | `dev` |
| `queries` | Query IDs and text | `dev_queries` |
| `corpus` | Plain text corpus: `_id`, `title`, `text` | `corpus` |
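Each config is distributed as a JSONL file with the fields listed above. As a minimal sketch of how the three pieces fit together, the snippet below joins qrels to query and document records; the IDs, question, and document text are invented for illustration and are not drawn from the dataset:

```python
import json

# Tiny in-memory JSONL records matching the documented schema.
# The IDs and text below are hypothetical examples only.
qrels_jsonl = '{"qid": "q1", "did": "d7", "score": 1}'
queries_jsonl = '{"_id": "q1", "text": "What was the 2019 operating margin?"}'
corpus_jsonl = '{"_id": "d7", "title": "10-K excerpt", "text": "A table plus its surrounding text."}'

qrels = [json.loads(line) for line in qrels_jsonl.splitlines()]
queries = {r["_id"]: r["text"] for r in map(json.loads, queries_jsonl.splitlines())}
corpus = {r["_id"]: r for r in map(json.loads, corpus_jsonl.splitlines())}

# Resolve each judgment to (query text, document title, relevance score).
joined = [(queries[j["qid"]], corpus[j["did"]]["title"], j["score"]) for j in qrels]
```

The same join works unchanged on the real `dev_qrels.jsonl`, `dev_queries.jsonl`, and `corpus.jsonl` files once they are read line by line.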
## TableIR Benchmark Statistics
| Dataset | Structured | #Train | #Dev | #Test | #Corpus |
|---|---|---|---|---|---|
| OpenWikiTables | ✓ | 53.8k | 6.6k | 6.6k | 24.7k |
| NQTables | ✓ | 9.6k | 1.1k | 1k | 170k |
| FeTaQA | ✓ | 7.3k | 1k | 2k | 10.3k |
| OTT-QA (small) | ✓ | 41.5k | 2.2k | -- | 8.8k |
| MultiHierTT | ✗ | -- | 929 | -- | 9.9k |
| AIT-QA | ✗ | -- | -- | 515 | 1.9k |
| StatcanRetrieval | ✗ | -- | -- | 870 | 5.9k |
| watsonxDocsQA | ✗ | -- | -- | 30 | 1.1k |
## Citation

If you use the TableIR Eval: Table-Text IR Evaluation Collection, please cite:
```bibtex
@misc{doshi2026tableir,
  title        = {TableIR Eval: Table-Text IR Evaluation Collection},
  author       = {Doshi, Meet and Boni, Odellia and Kumar, Vishwajeet and Sen, Jaydeep and Joshi, Sachindra},
  year         = {2026},
  institution  = {IBM Research},
  howpublished = {https://huggingface.co/collections/ibm-research/table-text-ir-evaluation},
  note         = {Hugging Face dataset collection}
}
```
All credit goes to the original authors of MultiHierTT. Please cite their work:
```bibtex
@inproceedings{zhao-etal-2022-multihiertt,
  title     = "{M}ulti{H}iertt: Numerical Reasoning over Multi Hierarchical Tabular and Textual Data",
  author    = "Zhao, Yilun and Li, Yunxiang and Li, Chenying and Zhang, Rui",
  booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
  month     = may,
  year      = "2022",
  address   = "Dublin, Ireland",
  publisher = "Association for Computational Linguistics",
  url       = "https://aclanthology.org/2022.acl-long.454",
  pages     = "6588--6600",
}
```