---
language:
- en
license: cc-by-sa-4.0
tags:
- retrieval
- text-retrieval
- beir
- stack-exchange
- community-question-answering
- duplicate-questions
- benchmark
pretty_name: BEIR CQADupStack English (retrieval)
size_categories:
- 1K<n<10K
task_categories:
- text-retrieval
---
# CQADupStack English (BEIR) — duplicate-question retrieval
## Dataset description
CQADupStack is a benchmark for community question answering (cQA) built from publicly available Stack Exchange content. It was introduced by Hoogeveen, Verspoor, and Baldwin at ADCS 2015 as a resource for studying duplicate questions: threads and posts are organized so that systems can be trained and evaluated on finding prior questions that match (or semantically duplicate) a newly asked question—central to reducing fragmentation and improving search on Q&A sites.
The original release aggregates material across twelve Stack Exchange forums (e.g., English, gaming, programmers) with annotations linking questions marked as duplicates in the platform’s moderation workflow, together with predefined splits so different papers remain comparable.
BEIR (Benchmarking IR) repackaged CQADupStack—along with many other public corpora—as a standard retrieval benchmark for zero-shot evaluation of lexical, sparse, dense, and hybrid retrievers across heterogeneous tasks. In the BEIR formulation, CQADupStack (English) is a duplicate-question retrieval setting: the “documents” are questions (or question-like posts) from the corpus, and the task is to rank the true duplicate(s) for each query highly.
This repository (orgrctera/beir_cqadupstack_english) exposes the benchmark in Parquet form for retrieval evaluation pipelines. Each row is one query with relevance judgments (expected_output) pointing at corpus document identifiers, aligned with the BEIR CQADupStack English test split.
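As a minimal sketch of working with these rows (the helper below assumes the flat field names shown later in this card; loading requires the `datasets` library and Hub access):

```python
import json

def row_doc_ids(row: dict) -> list[str]:
    """Extract the relevant corpus document ids from one query row."""
    return [item["id"] for item in json.loads(row["expected_output"])]

# Loading the split itself needs `pip install datasets` and network access:
#   from datasets import load_dataset
#   ds = load_dataset("orgrctera/beir_cqadupstack_english", split="test")
#   row_doc_ids(ds[0])  # ids to look up in the BEIR corpus
```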
## Scale (this Hub snapshot)
The published split in this dataset is:
| Split | Rows |
|---|---|
| test | 1,570 |
The underlying corpus in BEIR is large (on the order of hundreds of thousands of short documents—typical BEIR “100K–1M” bucket for CQADupStack overall). Full retrieval evaluation requires indexing that corpus and scoring queries against it; this card describes the query + qrels side packaged for CTERA-style evaluation rows.
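To make the indexing-and-scoring step concrete, here is a toy, pure-Python BM25 scorer. This is an illustrative sketch only; real BEIR evaluations index the full corpus with a proper search engine (or a dense retriever) rather than scoring documents in a Python loop.

```python
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Score each tokenized document in `corpus` against tokenized `query`."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    # Document frequency of each term across the corpus.
    df = Counter(t for d in corpus for t in set(d))
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            )
        scores.append(s)
    return scores
```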
## Task: retrieval (CQADupStack English)
The task is ad hoc retrieval specialized to duplicate question finding:
- Input: a natural-language question (the query)—often phrased as a user would post on Stack Exchange.
- Output: a ranked list of document IDs from the CQADupStack English corpus (or scores over the full collection), such that relevant IDs—those marked as duplicates in the official qrels—appear at the top.
Standard IR metrics apply (e.g., nDCG@k, Recall@k, MRR), using the provided qrels as ground truth.
**Note:** align the `expected_output` document IDs with the same BEIR CQADupStack English corpus you use for indexing (same ID space as the upstream BEIR release).
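For illustration, the headline metrics reduce to a few lines of plain Python. This sketch assumes `qrels` is a `{doc_id: relevance}` mapping for one query and `ranked` is that query's ranked list of doc ids; for reported numbers, use a vetted tool such as `pytrec_eval`.

```python
import math

def ndcg_at_k(qrels: dict, ranked: list, k: int = 10) -> float:
    """nDCG@k for one query: discounted gain of the run over the ideal run."""
    dcg = sum(qrels.get(doc, 0) / math.log2(rank + 2)
              for rank, doc in enumerate(ranked[:k]))
    ideal = sorted(qrels.values(), reverse=True)[:k]
    idcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

def recall_at_k(qrels: dict, ranked: list, k: int = 10) -> float:
    """Fraction of relevant documents retrieved in the top k."""
    relevant = {doc for doc, rel in qrels.items() if rel > 0}
    return len(relevant & set(ranked[:k])) / len(relevant) if relevant else 0.0
```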
## Data format (this repository)
Each record includes:
| Field | Description |
|---|---|
| `id` | UUID for this example row. |
| `input` | The query text (Stack Exchange–style question). |
| `expected_output` | JSON string: a list of objects `{"id": "<corpus-doc-id>", "score": <relevance>}`. Scores follow the BEIR qrels convention (typically 1 for relevant in binary settings). A query may have one or more relevant documents. |
| `metadata.query_id` | Original BEIR query identifier (string). |
| `metadata.split` | Split name; in this dataset, `test`. |
### Example 1 (single relevant document)

```json
{
  "id": "cd09dee3-e42e-459c-ab83-3e57654ee31e",
  "input": "Is it absolutely necessary to use \"than\" over \"then\" in a comparison?",
  "expected_output": "[{\"id\": \"72699\", \"score\": 1}]",
  "metadata.query_id": "14613",
  "metadata.split": "test"
}
```
### Example 2 (multiple relevant documents)

```json
{
  "id": "52276646-97cc-42ed-88ad-a2b60eea5e5c",
  "input": "What is the best answer to the question \"How are you\" in business meetings?",
  "expected_output": "[{\"id\": \"22320\", \"score\": 1}, {\"id\": \"105252\", \"score\": 1}, {\"id\": \"140049\", \"score\": 1}, {\"id\": \"74832\", \"score\": 1}]",
  "metadata.query_id": "101972",
  "metadata.split": "test"
}
```
## References

### CQADupStack (original dataset)
Doris Hoogeveen, Karin M. Verspoor, Timothy Baldwin
CQADupStack: A Benchmark Data Set for Community Question-Answering Research
Proceedings of the 20th Australasian Document Computing Symposium (ADCS 2015), Parramatta, NSW, Australia.
- DOI: 10.1145/2838931.2838934
- Anthology entry (metadata & BibTeX): IR Anthology — Hoogeveen et al., 2015
The paper motivates duplicate-question tasks on real Stack Exchange communities and describes the construction of CQADupStack from a Stack Exchange data dump (historical releases are cited in the original work), including duplicate links and evaluation protocols suited to retrieval and classification experiments.
### BEIR benchmark (CQADupStack as one of 18 datasets)
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych
BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models
NeurIPS 2021 (Datasets and Benchmarks Track).
Abstract (from arXiv): “Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark. Our results show BM25 is a robust baseline and re-ranking and late-interaction-based models on average achieve the best zero-shot performances, however, at high computational costs. In contrast, dense and sparse-retrieval models are computationally more efficient but often underperform other approaches, highlighting the considerable room for improvement in their generalization capabilities.”
- Paper: arXiv:2104.08663 — OpenReview (NeurIPS 2021 Datasets & Benchmarks); code and data: BEIR on GitHub.
## Related resources

- Raw BEIR-style mirrors on Hugging Face (corpus / queries / qrels in classic layouts), e.g. datasets under the BeIR organization named `cqadupstack*`, for JSONL + TSV packaging consistent with the upstream benchmark.
- MTEB also lists CQADupStack variants (e.g., English) for embedding evaluation—useful for cross-checking task definitions and statistics: MTEB on Hugging Face.
## Citation
If you use CQADupStack, cite the ADCS 2015 paper above. If you use the BEIR packaging or evaluation protocol, cite the BEIR NeurIPS 2021 paper. If you use this Parquet export, cite both the original data sources and BEIR as appropriate for your experiment.
## License
Stack Exchange content is typically distributed under Creative Commons terms; BEIR and downstream cards commonly reference cc-by-sa-4.0. Verify against your corpus snapshot and upstream Stack Exchange / BEIR terms if you need strict compliance.
*Dataset card maintained for the `orgrctera/beir_cqadupstack_english` Hub repository.*