---
license: cc-by-sa-4.0
language:
- en
pretty_name: BEIR CQADupStack GIS (Retrieval)
size_categories:
- "n<1K"
tags:
- information-retrieval
- beir
- retrieval
- duplicate-question-retrieval
- stack-exchange
- gis
- geographic-information-systems
- rag
---
# BEIR CQADupStack — GIS (`orgrctera/beir_cqadupstack_gis`)
## Overview
This release packages the **GIS** slice of **CQADupStack** from the [**BEIR**](https://github.com/beir-cellar/beir) (Benchmarking Information Retrieval) benchmark as a table-oriented dataset for **retrieval** evaluation and tooling (e.g. Langfuse-exported runs).
**CQADupStack** is a collection of **duplicate-question retrieval** benchmarks built from **Stack Exchange** communities. Each subforum (including **Geographic Information Systems**, `gis`) provides a **corpus** of posts and **queries** paired with **relevance judgments** (qrels): for a given question, the task is to retrieve **other posts** marked as duplicates (or duplicate-related) in the original cQA annotations.
The **GIS** forum covers **GIS software**, **spatial analysis**, **mapping APIs** (e.g. QGIS, ArcGIS, PostGIS, web mapping), and related workflows—so queries and documents mix **technical jargon**, **tool names**, and **problem descriptions**, which is typical for community Q&A IR.
**BEIR** standardizes CQADupStack into **per-subforum** configurations (e.g. `cqadupstack/android`, `cqadupstack/gis`). This Hub dataset is the **CTERA-formatted** **`gis`** **test** collection: **one row per query** with **gold relevant document IDs** in `expected_output`, aligned with BEIR’s **CQADupStack/gis** split.
**Source lineage:** [CQADupStack (Melbourne)](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) → [BEIR `cqadupstack`](https://github.com/beir-cellar/beir) (sub-corpus `gis`) → `orgrctera/beir_cqadupstack_gis`.
## Task
- **Task type:** **Retrieval** — **duplicate-question retrieval** on the **CQADupStack GIS** subforum (BEIR naming: **CQADupStack/gis**).
- **Input (`input`):** A **query** string: the text of a Stack Exchange **question** (natural language).
- **Reference (`expected_output`):** A JSON **string**: list of objects `{"id": "…", "score": 1}` (BEIR corpus document id in the `id` field) — **binary** relevance from BEIR qrels (duplicate or duplicate-related judgments as packaged by BEIR).
- **Metadata:** `metadata.query_id` is the BEIR **query** identifier; `metadata.split` is **`test`** (this release mirrors the BEIR **test** split for `gis`).
The retrieval system is evaluated by ranking the **full BEIR GIS corpus** (not stored row-wise here) and comparing retrieved IDs to these gold lists using standard IR metrics (**nDCG@k**, **MAP**, **Recall@k**, etc.), consistent with [BEIR evaluation](https://github.com/beir-cellar/beir).
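As a minimal sketch of that scoring step (the ranked ID list below is hypothetical; production evaluation should use the official BEIR tooling), **Recall@k** and binary-relevance **nDCG@k** against a row's `expected_output` can be computed like this:

```python
import json
import math

def gold_ids(expected_output: str) -> set:
    """Parse the JSON-string gold list into a set of document IDs."""
    return {item["id"] for item in json.loads(expected_output)}

def recall_at_k(ranked_ids, gold, k):
    """Fraction of gold documents found among the top-k retrieved IDs."""
    return len(set(ranked_ids[:k]) & gold) / len(gold)

def ndcg_at_k(ranked_ids, gold, k):
    """Binary-relevance nDCG@k: DCG of the ranking over the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, doc in enumerate(ranked_ids[:k]) if doc in gold)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(gold), k)))
    return dcg / ideal

# Example 2's gold list, scored against a made-up system ranking.
gold = gold_ids('[{"id": "22371", "score": 1}, '
                '{"id": "29168", "score": 1}, '
                '{"id": "23118", "score": 1}]')
ranked = ["22371", "99999", "23118", "11111", "29168"]
print(round(recall_at_k(ranked, gold, 5), 3))  # 1.0
print(round(ndcg_at_k(ranked, gold, 5), 3))    # 0.885
```

Because qrels here are binary (`score` is always `1`), nDCG reduces to a position-discounted hit count; MAP and Recall@k follow the same parse-then-compare pattern.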
## Background
### CQADupStack (original dataset)
**CQADupStack** (Hoogeveen, Verspoor & Baldwin, ADCS 2015) is a benchmark for **community question answering** research. It aggregates threads from **twelve** Stack Exchange sites, drawn from a **2014** data dump, with annotations linking **duplicate questions**. The resource supports both **retrieval** and **classification** experiments and includes standard splits for reproducibility.
> *Abstract (paraphrased):* The authors present CQADupStack, a new benchmark dataset for research on **duplicate question detection** and **retrieval** in community forums, based on Stack Exchange and designed to support rigorous comparison of methods across subcommunities.
### BEIR reformulation
**BEIR** (Thakur et al., NeurIPS 2021 Datasets & Benchmarks) repackages CQADupStack subforums as separate retrieval configurations. Each has a **corpus** (JSONL: `_id`, `title`, `text`), **queries** (JSONL), and **qrels** (TSV). The **GIS** subset targets **geospatial** Q&A: vocabulary and semantics differ from e.g. programming-only forums, which helps measure **domain transfer** for retrievers.
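To make the three file formats concrete, here is a small parsing sketch; the corpus record and qrels row below are fabricated for illustration, not real GIS data:

```python
import csv
import io
import json

# Fabricated samples of BEIR's on-disk formats.
corpus_jsonl = '{"_id": "d1", "title": "Clipping rasters in QGIS", "text": "How do I clip ..."}'
qrels_tsv = "query-id\tcorpus-id\tscore\nq1\td1\t1\n"

# Corpus: one JSON object per line, keyed by "_id".
corpus = {rec["_id"]: rec
          for rec in (json.loads(line) for line in corpus_jsonl.splitlines())}

# Qrels: TSV with a header row, nested as {query_id: {doc_id: score}}.
qrels = {}
for row in csv.DictReader(io.StringIO(qrels_tsv), delimiter="\t"):
    qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])

print(corpus["d1"]["title"])  # Clipping rasters in QGIS
print(qrels["q1"])            # {'d1': 1}
```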
### This release
Rows were **exported from Langfuse** (CTERA AI evaluation pipeline) in a flat schema: **885** **test** queries for **`gis`**, each with **gold document IDs** in `expected_output`.
## Data fields
| Column | Type | Description |
|--------|------|-------------|
| `id` | `string` | Stable UUID for this row in this Hub release. |
| `input` | `string` | Query text (duplicate-question retrieval query). |
| `expected_output` | `string` | JSON string: list of objects with BEIR corpus `id` and binary `score` (typically `1`). |
| `metadata.query_id` | `string` | BEIR query identifier for this row. |
| `metadata.split` | `string` | Split name (`test`). |
## Splits
| Split | Rows |
|-------|------|
| `test` | 885 |
## Examples
**Example 1 — single gold document**
- **`input`:** `qgis2 slope analysis from NED data gives crazy histogram`
- **`metadata.query_id`:** `79803`
- **`metadata.split`:** `test`
- **`expected_output`:**
```json
[{"id": "91868", "score": 1}]
```
**Example 2 — multiple gold documents**
- **`input`:** `Hardware requirements for a modern GIS workstation`
- **`metadata.query_id`:** `112391`
- **`expected_output`:**
```json
[
{"id": "22371", "score": 1},
{"id": "29168", "score": 1},
{"id": "23118", "score": 1}
]
```
**Example 3 — PostGIS / spatial query**
- **`input`:** `Using PostGIS to find airplanes flying on the same routes over the sea`
- **`metadata.query_id`:** `89922`
- **`expected_output`:**
```json
[{"id": "59729", "score": 1}]
```
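Note that `expected_output` is stored as a JSON **string**, so it needs one `json.loads` before use. Decoding Example 2 above:

```python
import json

# The expected_output string exactly as stored for Example 2.
expected_output = (
    '[{"id": "22371", "score": 1},'
    ' {"id": "29168", "score": 1},'
    ' {"id": "23118", "score": 1}]'
)

gold = json.loads(expected_output)
gold_doc_ids = [item["id"] for item in gold]
print(gold_doc_ids)  # ['22371', '29168', '23118']
assert all(item["score"] == 1 for item in gold)  # relevance is binary in this release
```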
## References and citations
### BEIR benchmark
> Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych. **BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models.** *NeurIPS 2021 Datasets and Benchmarks Track.*
- Paper: [OpenReview](https://openreview.net/forum?id=wCu6T5xFjeJ) · [arXiv:2104.08663](https://arxiv.org/abs/2104.08663)
- Code: [beir-cellar/beir](https://github.com/beir-cellar/beir)
```bibtex
@inproceedings{thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Thakur, Nandan and Reimers, Nils and R{\"u}ckl{\'e}, Andreas and Srivastava, Abhishek and Gurevych, Iryna},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### CQADupStack (original dataset)
> Doris Hoogeveen, Karin M. Verspoor, Timothy D. Baldwin. **CQADupStack: A Benchmark Data Set for Community Question-Answering Research.** *Proceedings of the 20th Australasian Document Computing Symposium (ADCS),* 2015.
- Resource page: [University of Melbourne — CQADupStack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Anthology: [IR Anthology ADCS 2015](https://ir.webis.de/anthology/2015.adcs_conference-2015.3/)
```bibtex
@inproceedings{hoogeveen2015cqadupstack,
title={{CQADupStack}: A Benchmark Data Set for Community Question-Answering Research},
author={Hoogeveen, Doris and Verspoor, Karin M. and Baldwin, Timothy D.},
booktitle={Proceedings of the 20th Australasian Document Computing Symposium},
year={2015}
}
```
### Stack Exchange licensing
Underlying content is subject to Stack Exchange **CC BY-SA** terms; see [Stack Exchange data licensing](https://stackoverflow.com/help/licensing).