---
annotations_creators:
- derived
language:
- eng
license: cc-by-4.0
multilinguality: monolingual
task_categories:
- text-retrieval
task_ids:
- document-retrieval
tags:
- table-retrieval
- text
pretty_name: NQTables
config_names:
- default
- queries
- corpus_linearized
- corpus_md
- corpus_structure
dataset_info:
- config_name: default
features:
- name: qid
dtype: string
- name: did
dtype: string
- name: score
dtype: int32
splits:
- name: train
num_bytes: 1044168
num_examples: 9594
- name: dev
num_bytes: 117198
num_examples: 1068
- name: test
num_bytes: 103735
num_examples: 966
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train_queries
num_bytes: 955578
num_examples: 9594
- name: dev_queries
num_bytes: 106125
num_examples: 1068
- name: test_queries
num_bytes: 94603
num_examples: 966
- config_name: corpus_linearized
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus_linearized
num_bytes: 416763646
num_examples: 169898
- config_name: corpus_md
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus_md
num_bytes: 448109052
num_examples: 169898
- config_name: corpus_structure
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: meta_data
dtype: string
- name: headers
sequence: string
- name: cells
sequence: string
splits:
- name: corpus_structure
num_bytes: 859992305
num_examples: 169898
configs:
- config_name: default
data_files:
- split: train
path: train_qrels.jsonl
- split: dev
path: dev_qrels.jsonl
- split: test
path: test_qrels.jsonl
- config_name: queries
data_files:
- split: train_queries
path: train_queries.jsonl
- split: dev_queries
path: dev_queries.jsonl
- split: test_queries
path: test_queries.jsonl
- config_name: corpus_linearized
data_files:
- split: corpus_linearized
path: corpus_linearized.jsonl
- config_name: corpus_md
data_files:
- split: corpus_md
path: corpus_md.jsonl
- config_name: corpus_structure
data_files:
- split: corpus_structure
path: corpus_structure.jsonl
---
# NQTables Retrieval
This dataset is part of a Table + Text retrieval benchmark. It includes queries and relevance judgments across the train, dev, and test splits, with the corpus provided in three formats: `corpus_linearized`, `corpus_md`, and `corpus_structure`.
## Configs
| Config | Description | Split(s) |
|---|---|---|
| `default` | Relevance judgments (qrels): `qid`, `did`, `score` | `train`, `dev`, `test` |
| `queries` | Query IDs and text | `train_queries`, `dev_queries`, `test_queries` |
| `corpus_linearized` | Linearized table representation | `corpus_linearized` |
| `corpus_md` | Markdown table representation | `corpus_md` |
| `corpus_structure` | Structured corpus with `headers`, `cells`, `meta_data`. `text` field corresponds to linearized Text + Table. | `corpus_structure` |
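The `default` config's qrels files are JSONL records with the `qid`, `did`, and `score` fields listed above. A minimal sketch of turning such lines into the qid → {did: score} lookup typically used for retrieval evaluation (the sample records below are synthetic, not drawn from the dataset):

```python
import json

def load_qrels(lines):
    """Build a qid -> {did: score} mapping from qrels JSONL lines
    (fields `qid`, `did`, `score`, as in the `default` config)."""
    qrels = {}
    for line in lines:
        rec = json.loads(line)
        qrels.setdefault(rec["qid"], {})[rec["did"]] = rec["score"]
    return qrels

# Synthetic example lines mirroring the qrels schema (ids are illustrative).
sample = [
    '{"qid": "q1", "did": "table_001", "score": 1}',
    '{"qid": "q1", "did": "table_002", "score": 0}',
    '{"qid": "q2", "did": "table_003", "score": 1}',
]
qrels = load_qrels(sample)
print(qrels["q1"])  # {'table_001': 1, 'table_002': 0}
```

The same mapping plugs directly into evaluation tools such as `pytrec_eval`, which expect qrels in this nested-dict form.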
## `corpus_structure` additional fields
| Field | Type | Description |
|---|---|---|
| `meta_data` | string | Table metadata / caption |
| `headers` | list[string] | Column headers |
| `cells` | list[string] | Flattened cell values |
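Since `cells` is a flat list, consumers may want to regroup it into rows for display or feature extraction. A sketch, assuming row-major order with one cell per header column (this layout is an assumption, not documented behavior of the dataset):

```python
def rows_from_flat(headers, cells):
    """Regroup the flattened `cells` list into rows of len(headers) columns.
    ASSUMPTION: cells are stored row-major with exactly one value per
    header column; verify against the actual records before relying on this."""
    n = len(headers)
    return [cells[i:i + n] for i in range(0, len(cells), n)]

# Illustrative values, not taken from the corpus.
headers = ["Year", "Champion"]
cells = ["2019", "A", "2020", "B"]
print(rows_from_flat(headers, cells))  # [['2019', 'A'], ['2020', 'B']]
```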
## TableIR Benchmark Statistics
| Dataset | Structured | #Train | #Dev | #Test | #Corpus |
|---|:---:|---:|---:|---:|---:|
| OpenWikiTables | ✓ | 53.8k | 6.6k | 6.6k | 24.7k |
| NQTables | ✓ | 9.6k | 1.1k | 1k | 170k |
| FeTaQA | ✓ | 7.3k | 1k | 2k | 10.3k |
| OTT-QA (small) | ✓ | 41.5k | 2.2k | -- | 8.8k |
| MultiHierTT | ✗ | -- | 929 | -- | 9.9k |
| AIT-QA | ✗ | -- | -- | 515 | 1.9k |
| StatcanRetrieval | ✗ | -- | -- | 870 | 5.9k |
| watsonxDocsQA | ✗ | -- | -- | 30 | 1.1k |
## Citation
If you use **TableIR Eval: Table-Text IR Evaluation Collection**, please cite:
```bibtex
@misc{doshi2026tableir,
title = {TableIR Eval: Table-Text IR Evaluation Collection},
author = {Doshi, Meet and Boni, Odellia and Kumar, Vishwajeet and Sen, Jaydeep and Joshi, Sachindra},
year = {2026},
institution = {IBM Research},
howpublished = {https://huggingface.co/collections/ibm-research/table-text-ir-evaluation},
note = {Hugging Face dataset collection}
}
```
All credit goes to the original authors; please also cite their work:
```bibtex
@inproceedings{herzig-etal-2021-open,
title = "Open Domain Question Answering over Tables via Dense Retrieval",
author = {Herzig, Jonathan and
M{\"u}ller, Thomas and
Krichene, Syrine and
Eisenschlos, Julian},
editor = "Toutanova, Kristina and
Rumshisky, Anna and
Zettlemoyer, Luke and
Hakkani-Tur, Dilek and
Beltagy, Iz and
Bethard, Steven and
Cotterell, Ryan and
Chakraborty, Tanmoy and
Zhou, Yichao",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.43/",
doi = "10.18653/v1/2021.naacl-main.43",
pages = "512--519",
abstract = "Recent advances in open-domain QA have led to strong models based on dense retrieval, but only focused on retrieving textual passages. In this work, we tackle open-domain QA over tables for the first time, and show that retrieval can be improved by a retriever designed to handle tabular context. We present an effective pre-training procedure for our retriever and improve retrieval quality with mined hard negatives. As relevant datasets are missing, we extract a subset of Natural Questions (Kwiatkowski et al., 2019) into a Table QA dataset. We find that our retriever improves retrieval results from 72.0 to 81.1 recall@10 and end-to-end QA results from 33.8 to 37.7 exact match, over a BERT based retriever."
}
``` |