---
language:
  - en
pretty_name: QReCC Passage Collection
---

# QReCC Passages (54M Web Crawl)

This repository hosts the QReCC passage collection, a raw web-crawl dataset of 54 million passages. Each record contains only two fields, `id` and `contents`, stored in compressed Parquet format for efficient loading and streaming.

## Source & Context

This dataset complements the QReCC retrieval setup outlined in the Apple ML-QReCC GitHub repository. Use this passage collection as the retrieval corpus for query rewriting and conversational information-seeking tasks.

## Files & Structure

Each Parquet file contains roughly 1 million passages with the following schema:

| Field      | Type   | Description                   |
|------------|--------|-------------------------------|
| `id`       | string | Unique passage identifier     |
| `contents` | string | Raw passage text (web crawl)  |

Files are compressed with zstd for compact storage and fast loading.

## Loading the Dataset

Use the Hugging Face `datasets` library for easy access:

```python
from datasets import load_dataset

# Streaming mode across all shards (returns an IterableDataset):
ds = load_dataset("slupart/qrecc-passages", split="train", streaming=True)

# Or load the shards as a regular, indexable dataset:
ds = load_dataset(
    "slupart/qrecc-passages",
    data_files={"train": "data/train-*.parquet"},
    split="train",
)

# Inspect (integer indexing requires the non-streaming dataset above)
print(ds)
print(ds[0])
print(ds[1234]["contents"][:200])
```

## Contact & Citation

If you use this dataset in academic or applied work, please cite both the original QReCC benchmark and our work:

- The original QReCC benchmark.
- Our work, DiSCo: *LLM Knowledge Distillation for Efficient Sparse Retrieval in Conversational Search*:

```bibtex
@inproceedings{lupart2025disco,
  title={DiSCo: LLM Knowledge Distillation for Efficient Sparse Retrieval in Conversational Search},
  author={Lupart, Simon and Aliannejadi, Mohammad and Kanoulas, Evangelos},
  booktitle={Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  pages={9--19},
  year={2025}
}
```