---
language:
- en
task_categories:
- text-retrieval
task_ids:
- document-retrieval
config_names:
- corpus
tags:
- text-retrieval
dataset_info:
  - config_name: qrels
    features:
      - name: query-id
        dtype: string
      - name: corpus-id
        dtype: string
      - name: type
        dtype: string
    splits:
      - name: passage
        num_bytes: 598330
        num_examples: 7150
      - name: document
        num_bytes: 485624
        num_examples: 6050
  - config_name: queries
    features:
      - name: _id
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: test
        num_bytes: 4781
        num_examples: 76
  - config_name: corpus
    features:
      - name: _id
        dtype: string
      - name: title
        dtype: string
      - name: headings
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: pass_core
        num_bytes: 2712089
        num_examples: 7126
      - name: pass_10k
        num_bytes: 1065541
        num_examples: 2874
      - name: pass_100k
        num_bytes: 33382351
        num_examples: 90000
      - name: pass_1M
        num_bytes: 333466010
        num_examples: 900000
      - name: pass_10M
        num_bytes: 3332841963
        num_examples: 9000000
      - name: pass_100M
        num_bytes: 33331696935
        num_examples: 90000000
      - name: doc_core
        num_bytes: 91711400
        num_examples: 6032
      - name: doc_10k
        num_bytes: 38457420
        num_examples: 3968
      - name: doc_100k
        num_bytes: 883536440
        num_examples: 90000
      - name: doc_1M
        num_bytes: 8850694962
        num_examples: 900000
      - name: doc_10M
        num_bytes: 88689338934
        num_examples: 9000000
configs:
  - config_name: qrels
    data_files:
      - split: passage
        path: qrels/passage.jsonl
      - split: document
        path: qrels/document.jsonl
  - config_name: queries
    data_files:
      - split: test
        path: queries.jsonl
  - config_name: corpus
    data_files:
      - split: pass_core
        path: passage/corpus_core.jsonl
      - split: pass_10k
        path: passage/corpus_10000.jsonl
      - split: pass_100k
        path: passage/corpus_100000.jsonl
      - split: pass_1M
        path: passage/corpus_1000000.jsonl
      - split: pass_10M
        path: passage/corpus_10000000_*.jsonl
      - split: pass_100M
        path: passage/corpus_100000000_*.jsonl
      - split: doc_core
        path: document/corpus_core.jsonl
      - split: doc_10k
        path: document/corpus_10000.jsonl
      - split: doc_100k
        path: document/corpus_100000.jsonl
      - split: doc_1M
        path: document/corpus_1000000.jsonl
      - split: doc_10M
        path: document/corpus_10000000_*.jsonl
---
<h1 align="center">CoRE: Controlled Retrieval Evaluation Dataset</h1>
<h4 align="center">
    <p>
        <a href="#🔍-motivation">Motivation</a> |
        <a href="#📦-dataset-overview">Dataset Overview</a> |
        <a href="#🏗-dataset-construction">Dataset Construction</a> |
        <a href="#🧱-dataset-structure">Dataset Structure</a> |
        <a href="#🏷-qrels-format">Qrels Format</a> |
        <a href="#📊-evaluation">Evaluation</a> |
        <a href="#📜-citation">Citation</a> |
        <a href="#🔗-links">Links</a> |
        <a href="#📬-contact">Contact</a>
    </p>
</h4>

**CoRE** (Controlled Retrieval Evaluation) is a benchmark dataset designed for the rigorous evaluation of embedding compression techniques in information retrieval. It isolates and controls critical variables — query relevance, distractor density, corpus size, and document length — to enable meaningful comparisons of retrieval performance across embedding configurations.

## 🔍 Motivation

Embedding compression is essential for scaling modern retrieval systems, but its effects are often evaluated under overly simplistic conditions. CoRE addresses this by offering a collection of corpora with:

* Multiple document lengths (passage and document) and sizes (10k to 100M)
* Fixed number of **relevant** and **distractor** documents per query
* Realistic evaluation grounded in TREC DL human relevance labels

This evaluation framework goes beyond, for example, the benchmark used in "The Curse of Dense Low-Dimensional Information Retrieval for Large Index Sizes" by [Reimers and Gurevych (2021)](https://doi.org/10.18653/v1/2021.acl-short.77), which ignores differences in document length and relies on naive random sampling, resulting in a less realistic experimental setup.

## 📦 Dataset Overview

CoRE builds on MS MARCO v2 and introduces high-quality distractors using pooled system runs from the [TREC 2023 Deep Learning Track](https://microsoft.github.io/msmarco/TREC-Deep-Learning.html). We ensure consistent query difficulty across corpus sizes and document types. This overcomes a key limitation of randomly sampled corpora, which can make retrieval trivially easy at smaller scales because few or no distractors survive the sampling.

| Document Type | # Queries | Corpus Sizes             |
| ------------- | --------- | ------------------------ |
| Passage       | 65        | 10k, 100k, 1M, 10M, 100M |
| Document      | 55        | 10k, 100k, 1M, 10M       |

For each query:

* **10 relevant documents**
* **100 high-quality distractors**, selected via Reciprocal Rank Fusion (RRF) from top TREC system runs (bottom 20% of runs excluded)

## 🏗 Dataset Construction

To avoid trivializing the retrieval task when reducing corpus size, CoRE follows the intelligent **corpus subsampling strategy** proposed by [Fröbe et al. (2025)](https://doi.org/10.1007/978-3-031-88708-6_29). This method is used to mine distractors from pooled ranking lists. These distractors are then included in all corpora of CoRE, ensuring a fixed *query difficulty*—unlike naive random sampling, where the number of distractors would decrease with corpus size.

Steps for both passage and document retrieval:

1. Start from MS MARCO v2 annotations
2. For each query:

   * Retain 10 relevant documents
   * Mine 100 distractors from RRF-fused rankings of top TREC 2023 DL submissions
3. Construct multiple corpus scales by aggregating relevant documents and distractors with randomly sampled filler documents
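The Reciprocal Rank Fusion step in the pipeline above can be sketched in a few lines. This is a generic RRF implementation, not the exact code used to build CoRE; the smoothing constant `k=60` is the value commonly used in the RRF literature and is an assumption here:

```python
def rrf_fuse(runs, k=60):
    """Fuse several ranked lists via Reciprocal Rank Fusion.

    runs: list of ranked doc-id lists, best-first.
    Each document scores sum(1 / (k + rank)) over the runs it appears in.
    Returns doc ids sorted by descending RRF score.
    """
    scores = {}
    for run in runs:
        for rank, doc_id in enumerate(run, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# Toy example: "b" ranks highly in all three runs, so it fuses to the top.
runs = [["a", "b"], ["b", "c"], ["b", "a"]]
fused = rrf_fuse(runs)  # → ["b", "a", "c"]
```

In CoRE's construction, the fused ranking (after dropping the bottom 20% of TREC runs) is what the 100 distractors per query are mined from, with relevant documents excluded.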

## 🧱 Dataset Structure

The dataset consists of three subsets: `queries`, `qrels`, and `corpus`.

* **queries**: contains only one split (`test`)
* **qrels**: contains two splits: `passage` and `document`
* **corpus**: contains 11 splits, detailed below:

<div style="display: flex; gap: 2em;">

<table>
  <caption><strong>Passage Corpus Splits</strong></caption>
  <thead><tr><th>Split</th><th># Documents</th></tr></thead>
  <tbody>
    <tr><td>pass_core</td><td>~7,130</td></tr>
    <tr><td>pass_10k</td><td>~2,870</td></tr>
    <tr><td>pass_100k</td><td>90,000</td></tr>
    <tr><td>pass_1M</td><td>900,000</td></tr>
    <tr><td>pass_10M</td><td>9,000,000</td></tr>
    <tr><td>pass_100M</td><td>90,000,000</td></tr>
  </tbody>
</table>

<table>
  <caption><strong>Document Corpus Splits</strong></caption>
  <thead><tr><th>Split</th><th># Documents</th></tr></thead>
  <tbody>
    <tr><td>doc_core</td><td>~6,030</td></tr>
    <tr><td>doc_10k</td><td>~3,970</td></tr>
    <tr><td>doc_100k</td><td>90,000</td></tr>
    <tr><td>doc_1M</td><td>900,000</td></tr>
    <tr><td>doc_10M</td><td>9,000,000</td></tr>
  </tbody>
</table>

</div>

> Note: The `_core` splits contain only relevant and distractor documents. All other splits are topped up with randomly sampled documents to reach the target size.

## 🏷 Qrels Format

The `qrels` files in CoRE differ from typical IR datasets. Instead of the standard relevance grading (e.g., 0, 1, 2), CoRE uses two distinct labels:

* `relevant` (10 documents per query)
* `distractor` (100 documents per query)

This enables focused evaluation of model sensitivity to compression under tightly controlled relevance and distractor distributions.
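Since the qrels rows carry a string `type` field rather than graded relevance, downstream code typically buckets them per query. A minimal sketch, assuming qrels rows are plain dicts with the `query-id`, `corpus-id`, and `type` fields shown in the schema above:

```python
from collections import defaultdict


def group_qrels(qrels):
    """Split CoRE qrels rows into per-query relevant and distractor id sets.

    qrels: iterable of dicts with keys 'query-id', 'corpus-id', 'type',
    where 'type' is either 'relevant' or 'distractor'.
    """
    relevant, distractors = defaultdict(set), defaultdict(set)
    for row in qrels:
        bucket = relevant if row["type"] == "relevant" else distractors
        bucket[row["query-id"]].add(row["corpus-id"])
    return relevant, distractors
```

The resulting sets can be fed directly into recall- or nDCG-style metrics, treating distractors as hard negatives.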

## 📊 Evaluation

```python
from datasets import load_dataset

# Load queries
queries = load_dataset("PaDaS-Lab/CoRE", name="queries", split="test")

# Load relevance judgments
qrels = load_dataset("PaDaS-Lab/CoRE", name="qrels", split="passage")

# Load a 100k-scale corpus for passage retrieval
corpus = load_dataset("PaDaS-Lab/CoRE", name="corpus", split="pass_100k")
```
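Once queries, qrels, and a corpus are loaded, a simple metric such as recall@k can be computed against the per-query relevant sets. The helper below is a generic sketch, not part of the CoRE tooling; `ranked` stands for whatever ranked list of corpus `_id`s your retrieval system produces for a query:

```python
def recall_at_k(ranked, relevant, k=10):
    """Fraction of a query's relevant ids retrieved in the top k.

    ranked: list of corpus ids, best-first, from your retriever.
    relevant: set of relevant corpus ids for the query (from qrels).
    """
    if not relevant:
        return 0.0
    return len(set(ranked[:k]) & relevant) / len(relevant)
```

Because every query has exactly 10 relevant documents, recall@10 here is comparable across all corpus scales, which is the point of the controlled design.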

## 📜 Citation

If you use CoRE in your research, please cite:

```bibtex
@misc{caspari2025corect,
  title={CoRECT: A Framework for Evaluating Embedding Compression Techniques at Scale}, 
  author={L. Caspari and M. Dinzinger and K. Ghosh Dastidar and C. Fellicious and J. Mitrović and M. Granitzer},
  year={2025},
  eprint={2510.19340},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2510.19340}, 
}
```

## 🔗 Links

* [Paper](https://arxiv.org/pdf/2510.19340)
* [MS MARCO](https://microsoft.github.io/msmarco/)
* [TREC](https://trec.nist.gov/)

## 📬 Contact

For questions or collaboration opportunities, contact us at `michael.dinzinger@uni-passau.de`.