---
annotations_creators:
- derived
language:
- eng
license: other
license_name: statcan-dialogue-retrieval-license
license_link: >-
  https://huggingface.co/datasets/McGill-NLP/statcan-dialogue-dataset-retrieval/blob/main/LICENSE.md
multilinguality: monolingual
task_categories:
- text-retrieval
task_ids:
- document-retrieval
tags:
- table-retrieval
- text
pretty_name: StatCan
config_names:
- default
- queries
- corpus
dataset_info:
- config_name: default
  features:
  - name: qid
    dtype: string
  - name: did
    dtype: string
  - name: score
    dtype: int32
  splits:
  - name: test
    num_bytes: 43500
    num_examples: 870
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: test_queries
    num_bytes: 829463
    num_examples: 870
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_bytes: 40220365
    num_examples: 5907
configs:
- config_name: default
  data_files:
  - split: test
    path: test_qrels.jsonl
- config_name: queries
  data_files:
  - split: test_queries
    path: test_queries.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
---
# StatCan Retrieval
This dataset is part of a Table + Text retrieval benchmark. It provides queries and relevance judgments for a single `test` split, together with a plain-text corpus.
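A minimal loading sketch follows. The repo ID is a placeholder, since this card does not state the dataset's exact Hugging Face path; the three configs it loads are described under Configs below.

```python
from datasets import load_dataset

# Placeholder repo ID -- substitute this dataset's actual Hugging Face path.
REPO_ID = "ibm-research/statcan-retrieval"

qrels   = load_dataset(REPO_ID, "default", split="test")          # qid, did, score
queries = load_dataset(REPO_ID, "queries", split="test_queries")  # _id, text
corpus  = load_dataset(REPO_ID, "corpus",  split="corpus")        # _id, title, text

print(len(queries), len(corpus), len(qrels))  # per dataset_info: 870, 5907, 870
```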
## Configs
| Config | Description | Split(s) |
|---|---|---|
| `default` | Relevance judgments (qrels): `qid`, `did`, `score` | `test` |
| `queries` | Query IDs and text: `_id`, `text` | `test_queries` |
| `corpus` | Plain-text corpus: `_id`, `title`, `text` | `corpus` |
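The `default` config is a standard qrels mapping, so any TREC-style evaluator can consume it. Below is a sketch of scoring a retrieval run against it, reusing the variables from the loading snippet above; `pytrec_eval` is one common choice, not something this card prescribes, and the dummy `run` stands in for your retriever's scores.

```python
from collections import defaultdict

import pytrec_eval  # pip install pytrec-eval

# pytrec_eval expects qrels as {qid: {did: int relevance}}.
qrels_dict = defaultdict(dict)
for row in qrels:
    qrels_dict[row["qid"]][row["did"]] = int(row["score"])

# `run` should hold your retriever's scores as {qid: {did: float}};
# as a stand-in, score every query against one arbitrary document.
run = {qid: {corpus[0]["_id"]: 1.0} for qid in qrels_dict}

evaluator = pytrec_eval.RelevanceEvaluator(dict(qrels_dict), {"ndcg_cut_10", "recall_100"})
per_query = evaluator.evaluate(run)
mean_ndcg = sum(m["ndcg_cut_10"] for m in per_query.values()) / len(per_query)
print(f"nDCG@10 = {mean_ndcg:.4f}")
```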
## TableIR Benchmark Statistics
| Dataset | Structured | #Train | #Dev | #Test | #Corpus |
|---|---|---|---|---|---|
| OpenWikiTables | ✓ | 53.8k | 6.6k | 6.6k | 24.7k |
| NQTables | ✓ | 9.6k | 1.1k | 1k | 170k |
| FeTaQA | ✓ | 7.3k | 1k | 2k | 10.3k |
| OTT-QA (small) | ✓ | 41.5k | 2.2k | -- | 8.8k |
| MultiHierTT | ✗ | -- | 929 | -- | 9.9k |
| AIT-QA | ✗ | -- | -- | 515 | 1.9k |
| StatcanRetrieval | ✗ | -- | -- | 870 | 5.9k |
| watsonxDocsQA | ✗ | -- | -- | 30 | 1.1k |
## Citation
If you use the *TableIR Eval: Table-Text IR Evaluation Collection*, please cite:
```bibtex
@misc{doshi2026tableir,
  title        = {TableIR Eval: Table-Text IR Evaluation Collection},
  author       = {Doshi, Meet and Boni, Odellia and Kumar, Vishwajeet and Sen, Jaydeep and Joshi, Sachindra},
  year         = {2026},
  institution  = {IBM Research},
  howpublished = {https://huggingface.co/collections/ibm-research/table-text-ir-evaluation},
  note         = {Hugging Face dataset collection}
}
```
All credit goes to the original authors; please cite their work:
```bibtex
@inproceedings{lu-etal-2023-statcan,
  title     = "The {S}tat{C}an Dialogue Dataset: Retrieving Data Tables through Conversations with Genuine Intents",
  author    = "Lu, Xing Han and Reddy, Siva and de Vries, Harm",
  editor    = "Vlachos, Andreas and Augenstein, Isabelle",
  booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
  month     = may,
  year      = "2023",
  address   = "Dubrovnik, Croatia",
  publisher = "Association for Computational Linguistics",
  url       = "https://aclanthology.org/2023.eacl-main.206/",
  doi       = "10.18653/v1/2023.eacl-main.206",
  pages     = "2799--2829",
  abstract  = "We introduce the StatCan Dialogue Dataset consisting of 19,379 conversation turns between agents working at Statistics Canada and online users looking for published data tables. The conversations stem from genuine intents, are held in English or French, and lead to agents retrieving one of over 5000 complex data tables. Based on this dataset, we propose two tasks: (1) automatic retrieval of relevant tables based on a on-going conversation, and (2) automatic generation of appropriate agent responses at each turn. We investigate the difficulty of each task by establishing strong baselines. Our experiments on a temporal data split reveal that all models struggle to generalize to future conversations, as we observe a significant drop in performance across both tasks when we move from the validation to the test set. In addition, we find that response generation models struggle to decide when to return a table. Considering that the tasks pose significant challenges to existing models, we encourage the community to develop models for our task, which can be directly used to help knowledge workers find relevant tables for live chat users."
}
```