---
annotations_creators:
  - derived
language:
  - eng
license: cc-by-4.0
multilinguality: monolingual
task_categories:
  - text-retrieval
task_ids:
  - document-retrieval
tags:
  - table-retrieval
  - text
pretty_name: NQTables
config_names:
  - default
  - queries
  - corpus_linearized
  - corpus_md
  - corpus_structure
dataset_info:
  - config_name: default
    features:
      - name: qid
        dtype: string
      - name: did
        dtype: string
      - name: score
        dtype: int32
    splits:
      - name: train
        num_bytes: 1044168
        num_examples: 9594
      - name: dev
        num_bytes: 117198
        num_examples: 1068
      - name: test
        num_bytes: 103735
        num_examples: 966
  - config_name: queries
    features:
      - name: _id
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train_queries
        num_bytes: 955578
        num_examples: 9594
      - name: dev_queries
        num_bytes: 106125
        num_examples: 1068
      - name: test_queries
        num_bytes: 94603
        num_examples: 966
  - config_name: corpus_linearized
    features:
      - name: _id
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: corpus_linearized
        num_bytes: 416763646
        num_examples: 169898
  - config_name: corpus_md
    features:
      - name: _id
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: corpus_md
        num_bytes: 448109052
        num_examples: 169898
  - config_name: corpus_structure
    features:
      - name: _id
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
      - name: meta_data
        dtype: string
      - name: headers
        sequence: string
      - name: cells
        sequence: string
    splits:
      - name: corpus_structure
        num_bytes: 859992305
        num_examples: 169898
configs:
  - config_name: default
    data_files:
      - split: train
        path: train_qrels.jsonl
      - split: dev
        path: dev_qrels.jsonl
      - split: test
        path: test_qrels.jsonl
  - config_name: queries
    data_files:
      - split: train_queries
        path: train_queries.jsonl
      - split: dev_queries
        path: dev_queries.jsonl
      - split: test_queries
        path: test_queries.jsonl
  - config_name: corpus_linearized
    data_files:
      - split: corpus_linearized
        path: corpus_linearized.jsonl
  - config_name: corpus_md
    data_files:
      - split: corpus_md
        path: corpus_md.jsonl
  - config_name: corpus_structure
    data_files:
      - split: corpus_structure
        path: corpus_structure.jsonl
---

# NQTables Retrieval

This dataset is part of a Table + Text retrieval benchmark. It includes queries and relevance judgments across train, dev, and test splits, with the corpus provided in three formats: `corpus_linearized`, `corpus_md`, and `corpus_structure`.

## Configs

| Config | Description | Split(s) |
|---|---|---|
| `default` | Relevance judgments (qrels): `qid`, `did`, `score` | `train`, `dev`, `test` |
| `queries` | Query IDs and text | `train_queries`, `dev_queries`, `test_queries` |
| `corpus_linearized` | Linearized table representation | `corpus_linearized` |
| `corpus_md` | Markdown table representation | `corpus_md` |
| `corpus_structure` | Structured corpus with `headers`, `cells`, `meta_data`; the `text` field is the linearized Text + Table | `corpus_structure` |
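A minimal sketch of how the three config families fit together: qrels rows link a query ID to a document ID, so evaluation code joins them against the `queries` and corpus configs. The record shapes below follow the schemas above, but the IDs and strings are made up; real data would come from `datasets.load_dataset` on the respective configs.

```python
# Toy records mirroring the dataset schemas (illustrative values only).
qrels = [
    {"qid": "q1", "did": "d7", "score": 1},
    {"qid": "q2", "did": "d3", "score": 1},
]
# queries config: _id -> text; corpus configs: _id -> text
queries = {"q1": "who won the 2018 world cup", "q2": "capital of france"}
corpus = {"d7": "France won the 2018 FIFA World Cup ...",
          "d3": "Paris is the capital of France ..."}

# Resolve each judgment into (query text, document text, relevance score).
pairs = [(queries[r["qid"]], corpus[r["did"]], r["score"]) for r in qrels]
for q, d, s in pairs:
    print(f"{s}\t{q!r} -> {d!r}")
```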

## `corpus_structure` additional fields

| Field | Type | Description |
|---|---|---|
| `meta_data` | string | Table metadata / caption |
| `headers` | list[string] | Column headers |
| `cells` | list[string] | Flattened cell values |
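Since `cells` is a flat list, a 2-D table can be rebuilt by chunking it by the number of headers. The sketch below assumes row-major flattening (an assumption, not stated by the dataset; sanity-check a few records against the `corpus_md` config) and uses made-up header/cell values.

```python
# Example corpus_structure fields (illustrative values, not from the dataset).
headers = ["Year", "Champion"]
cells = ["2018", "France", "2014", "Germany"]

# Assumption: cells are flattened row-major, so each row has len(headers) cells.
ncols = len(headers)
rows = [cells[i:i + ncols] for i in range(0, len(cells), ncols)]
table = [dict(zip(headers, row)) for row in rows]
print(table)
# -> [{'Year': '2018', 'Champion': 'France'}, {'Year': '2014', 'Champion': 'Germany'}]
```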

## TableIR Benchmark Statistics

| Dataset | Structured | #Train | #Dev | #Test | #Corpus |
|---|---|---|---|---|---|
| OpenWikiTables | | 53.8k | 6.6k | 6.6k | 24.7k |
| NQTables | | 9.6k | 1.1k | 1k | 170k |
| FeTaQA | | 7.3k | 1k | 2k | 10.3k |
| OTT-QA (small) | | 41.5k | 2.2k | -- | 8.8k |
| MultiHierTT | | -- | 929 | -- | 9.9k |
| AIT-QA | | -- | -- | 515 | 1.9k |
| StatcanRetrieval | | -- | -- | 870 | 5.9k |
| watsonxDocsQA | | -- | -- | 30 | 1.1k |
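For an IR collection like this, the standard headline metric is recall@k: the fraction of relevant documents (qrels with `score` > 0) that a system ranks in its top k, averaged over queries. The sketch below is a hedged illustration over toy qrels and a made-up run; it is not an official evaluation script for this benchmark.

```python
from collections import defaultdict

def recall_at_k(qrels, run, k=10):
    """Average per-query recall@k.

    qrels: list of {"qid", "did", "score"} dicts (the `default` config schema).
    run:   qid -> ranked list of doc ids produced by some retriever.
    """
    relevant = defaultdict(set)
    for r in qrels:
        if r["score"] > 0:
            relevant[r["qid"]].add(r["did"])
    scores = []
    for qid, rel in relevant.items():
        top = set(run.get(qid, [])[:k])
        scores.append(len(rel & top) / len(rel))
    return sum(scores) / len(scores) if scores else 0.0

# Toy judgments and a made-up system ranking.
qrels = [{"qid": "q1", "did": "d7", "score": 1},
         {"qid": "q2", "did": "d3", "score": 1}]
run = {"q1": ["d7", "d2"], "q2": ["d9", "d4"]}
print(recall_at_k(qrels, run, k=2))  # 0.5: q1's d7 is found, q2's d3 is not
```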

## Citation

If you use the TableIR Eval: Table-Text IR Evaluation Collection, please cite:

```bibtex
@misc{doshi2026tableir,
  title        = {TableIR Eval: Table-Text IR Evaluation Collection},
  author       = {Doshi, Meet and Boni, Odellia and Kumar, Vishwajeet and Sen, Jaydeep and Joshi, Sachindra},
  year         = {2026},
  institution  = {IBM Research},
  howpublished = {https://huggingface.co/collections/ibm-research/table-text-ir-evaluation},
  note         = {Hugging Face dataset collection}
}
```

All credit goes to the original authors. Please cite their work:

```bibtex
@inproceedings{herzig-etal-2021-open,
    title = "Open Domain Question Answering over Tables via Dense Retrieval",
    author = {Herzig, Jonathan  and
      M{\"u}ller, Thomas  and
      Krichene, Syrine  and
      Eisenschlos, Julian},
    editor = "Toutanova, Kristina  and
      Rumshisky, Anna  and
      Zettlemoyer, Luke  and
      Hakkani-Tur, Dilek  and
      Beltagy, Iz  and
      Bethard, Steven  and
      Cotterell, Ryan  and
      Chakraborty, Tanmoy  and
      Zhou, Yichao",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.naacl-main.43/",
    doi = "10.18653/v1/2021.naacl-main.43",
    pages = "512--519",
    abstract = "Recent advances in open-domain QA have led to strong models based on dense retrieval, but only focused on retrieving textual passages. In this work, we tackle open-domain QA over tables for the first time, and show that retrieval can be improved by a retriever designed to handle tabular context. We present an effective pre-training procedure for our retriever and improve retrieval quality with mined hard negatives. As relevant datasets are missing, we extract a subset of Natural Questions (Kwiatkowski et al., 2019) into a Table QA dataset. We find that our retriever improves retrieval results from 72.0 to 81.1 recall@10 and end-to-end QA results from 33.8 to 37.7 exact match, over a BERT based retriever."
}
```