---
annotations_creators:
- derived
language:
- eng
license: cc-by-4.0
multilinguality: monolingual
task_categories:
- text-retrieval
task_ids:
- document-retrieval
tags:
- table-retrieval
- text
pretty_name: OTT-QA
config_names:
- default
- queries
- corpus_linearized
- corpus_md
- corpus_structure
dataset_info:
- config_name: default
features:
- name: qid
dtype: string
- name: did
dtype: string
- name: score
dtype: int32
splits:
- name: dev
num_bytes: 185688
num_examples: 2214
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: dev_queries
num_bytes: 336275
num_examples: 2214
- config_name: corpus_linearized
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus_linearized
num_bytes: 19907385
num_examples: 8891
- config_name: corpus_md
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus_md
num_bytes: 22363801
num_examples: 8891
- config_name: corpus_structure
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: meta_data
dtype: string
- name: headers
sequence: string
- name: cells
sequence: string
splits:
- name: corpus_structure
num_bytes: 41759044
num_examples: 8891
configs:
- config_name: default
data_files:
- split: dev
path: dev_qrels.jsonl
- config_name: queries
data_files:
- split: dev_queries
path: dev_queries.jsonl
- config_name: corpus_linearized
data_files:
- split: corpus_linearized
path: corpus_linearized.jsonl
- config_name: corpus_md
data_files:
- split: corpus_md
path: corpus_md.jsonl
- config_name: corpus_structure
data_files:
- split: corpus_structure
path: corpus_structure.jsonl
---
# OTT-QA Retrieval
This dataset is part of a Table + Text retrieval benchmark. It includes queries and relevance judgments for the dev split, with the corpus provided in three formats: `corpus_linearized`, `corpus_md`, `corpus_structure`.
## Configs
| Config | Description | Split(s) |
|---|---|---|
| `default` | Relevance judgments (qrels): `qid`, `did`, `score` | `dev` |
| `queries` | Query IDs and text | `dev_queries` |
| `corpus_linearized` | Linearized table representation | `corpus_linearized` |
| `corpus_md` | Markdown table representation | `corpus_md` |
| `corpus_structure` | Structured corpus with `headers`, `cells`, `meta_data`. `text` field corresponds to linearized Text + Table. | `corpus_structure` |
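The `default` and `queries` configs share query IDs (`qid` in the qrels, `_id` in the queries), so judgments can be joined back to query text. A minimal sketch of that join, using illustrative rows rather than real data (the field names come from the config descriptions above):

```python
# Sketch: joining qrels (`default` config) with the `queries` config by
# query ID. Field names (`qid`, `did`, `score`, `_id`, `text`) follow the
# dataset card; the sample rows below are illustrative, not real data.

qrels = [
    {"qid": "q1", "did": "doc_42", "score": 1},
    {"qid": "q2", "did": "doc_7", "score": 1},
]
queries = [
    {"_id": "q1", "text": "Which team won the 2010 final?"},
    {"_id": "q2", "text": "Who directed the film listed first?"},
]

# Index queries by ID, then attach each query's text to its judgments.
query_text = {q["_id"]: q["text"] for q in queries}
joined = [
    {"query": query_text[j["qid"]], "did": j["did"], "score": j["score"]}
    for j in qrels
]
```

The same pattern applies to joining `did` against any of the three corpus configs via their `_id` field.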
## `corpus_structure` additional fields
| Field | Type | Description |
|---|---|---|
| `meta_data` | string | Table metadata / caption |
| `headers` | list[string] | Column headers |
| `cells` | list[string] | Flattened cell values |
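Since `cells` is a flat list, a consumer typically regroups it into rows using `headers`. A minimal sketch, assuming row-major flattening with `len(cells)` a multiple of `len(headers)` (an assumption; verify against the actual data):

```python
# Sketch: regrouping the flattened `cells` list from `corpus_structure` into
# per-row dicts keyed by column header. Assumes row-major order (assumption).

def rows_from_flat(headers, cells):
    """Chunk a flat cell list into one dict per table row."""
    n = len(headers)
    if n == 0 or len(cells) % n != 0:
        raise ValueError("cell count is not a multiple of the header count")
    return [dict(zip(headers, cells[i:i + n])) for i in range(0, len(cells), n)]

headers = ["Year", "Champion"]                    # illustrative values
cells = ["2010", "Spain", "2014", "Germany"]
rows = rows_from_flat(headers, cells)
# rows -> [{"Year": "2010", "Champion": "Spain"},
#          {"Year": "2014", "Champion": "Germany"}]
```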
## TableIR Benchmark Statistics
| Dataset | Structured | #Train | #Dev | #Test | #Corpus |
|---|:---:|---:|---:|---:|---:|
| OpenWikiTables | ✓ | 53.8k | 6.6k | 6.6k | 24.7k |
| NQTables | ✓ | 9.6k | 1.1k | 1k | 170k |
| FeTaQA | ✓ | 7.3k | 1k | 2k | 10.3k |
| OTT-QA (small) | ✓ | 41.5k | 2.2k | -- | 8.8k |
| MultiHierTT | ✗ | -- | 929 | -- | 9.9k |
| AIT-QA | ✗ | -- | -- | 515 | 1.9k |
| StatcanRetrieval | ✗ | -- | -- | 870 | 5.9k |
| watsonxDocsQA | ✗ | -- | -- | 30 | 1.1k |
## Citation
If you use **TableIR Eval: Table-Text IR Evaluation Collection**, please cite:
```bibtex
@misc{doshi2026tableir,
title = {TableIR Eval: Table-Text IR Evaluation Collection},
author = {Doshi, Meet and Boni, Odellia and Kumar, Vishwajeet and Sen, Jaydeep and Joshi, Sachindra},
year = {2026},
institution = {IBM Research},
howpublished = {https://huggingface.co/collections/ibm-research/table-text-ir-evaluation},
note = {Hugging Face dataset collection}
}
```
All credit goes to the original authors. Please also cite their work:
```bibtex
@inproceedings{chen2021ottqa,
  title={Open Question Answering over Tables and Text},
  author={Wenhu Chen and Ming-wei Chang and Eva Schlinger and William Wang and William Cohen},
  booktitle={Proceedings of ICLR 2021},
  year={2021}
}
```