  path: document/corpus_1000000.jsonl
- split: doc_10M
  path: document/corpus_10000000_*.jsonl
---
<h1 align="center">CoRE: Controlled Retrieval Evaluation Dataset</h1>
<h4 align="center">
<p>
<a href="#motivation">Motivation</a> |
<a href="#dataset_overview">Dataset Overview</a> |
<a href="#dataset_construction">Dataset Construction</a> |
<a href="#dataset_structure">Dataset Structure</a> |
<a href="#qrels_format">Qrels Format</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#citation">Citation</a> |
<a href="#links">Links</a> |
<a href="#contact">Contact</a>
</p>
</h4>

**CoRE** (Controlled Retrieval Evaluation) is a benchmark dataset designed for the rigorous evaluation of embedding compression techniques in information retrieval.
<!-- It isolates and controls critical variables—query relevance, distractor density, corpus size, and document length—to facilitate meaningful comparisons of retrieval performance across different embedding configurations. -->

## 🔍 Motivation

Embedding compression is essential for scaling modern retrieval systems, but its effects are often evaluated under overly simplistic conditions. CoRE addresses this by offering a collection of corpora with:

* Multiple document lengths (passage and document) and corpus sizes (10k to 100M)
* A fixed number of **relevant** and **distractor** documents per query
* Realistic evaluation grounded in TREC DL human relevance labels

This evaluation framework goes beyond, e.g., the benchmark of [Reimers and Gurevych (2021)](https://doi.org/10.18653/v1/2021.acl-short.77), "The Curse of Dense Low-Dimensional Information Retrieval for Large Index Sizes", which considers only a single document length and relies on simple random sampling, yielding a less realistic experimental setup.

## 📦 Dataset Overview

CoRE builds on MS MARCO v2 and introduces high-quality distractors mined from pooled system runs of the [TREC 2023 Deep Learning Track](https://microsoft.github.io/msmarco/TREC-Deep-Learning.html). Query difficulty is kept consistent across corpus sizes and document types, which overcomes a key limitation of randomly sampled corpora: as the sample shrinks, hardly any distractors remain and the retrieval task becomes trivially easy.

| Document Type | # Queries | Corpus Sizes             |
| ------------- | --------- | ------------------------ |
| Passage       | 65        | 10k, 100k, 1M, 10M, 100M |
| Document      | 55        | 10k, 100k, 1M, 10M       |

For each query:

* **10 relevant documents**
* **100 high-quality distractors**, selected via Reciprocal Rank Fusion (RRF) from top TREC system runs (bottom 20% of runs excluded)
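
The RRF step behind the distractor selection can be sketched in a few lines. This is a generic implementation of Reciprocal Rank Fusion, not the actual CoRE pipeline; the smoothing constant `k=60` (from the original RRF paper) and the toy runs are illustrative assumptions:

```python
from collections import defaultdict

def rrf_fuse(runs, k=60):
    """Fuse several ranked lists of doc IDs into one ranking.

    runs: list of rankings, each a list of doc IDs ordered best-first.
    k: smoothing constant; 60 is the classic default, not necessarily
       the value used for CoRE.
    """
    scores = defaultdict(float)
    for run in runs:
        for rank, doc_id in enumerate(run, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Sort doc IDs by fused score, best first
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: three system runs over a tiny candidate pool
runs = [
    ["d1", "d2", "d3"],
    ["d2", "d1", "d4"],
    ["d2", "d3", "d1"],
]
fused = rrf_fuse(runs)  # d2 is ranked first by two of three systems
```

Documents ranked highly by many systems yet not labeled relevant make strong distractors, which is why fusion is preferable to taking any single run.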

## 🏗 Dataset Construction

To avoid trivializing the retrieval task when reducing corpus size, CoRE follows the intelligent **corpus subsampling strategy** proposed by [Fröbe et al. (2025)](https://doi.org/10.1007/978-3-031-88708-6_29). This method is used to mine distractors from pooled ranking lists. These distractors are then included in all corpora of CoRE, ensuring a fixed *query difficulty*—unlike naive random sampling, where the number of distractors would decrease with corpus size.

Steps for both passage and document retrieval:

1. Start from MS MARCO v2 annotations
2. For each query:
   * Retain 10 relevant documents
   * Mine 100 distractors from RRF-fused rankings of top TREC 2023 DL submissions
3. Construct multiple corpus scales by aggregating relevant documents and distractors with randomly sampled filler documents
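
The top-up step (3.) can be sketched as follows. The function `build_corpus` and its data shapes are hypothetical illustrations of the strategy, not the actual CoRE tooling:

```python
import random

def build_corpus(core_docs, pool, target_size, seed=42):
    """Top up the core set (relevant + distractor docs, shared across
    all scales) with random filler documents until target_size.

    core_docs: dict doc_id -> text, included in every corpus scale.
    pool: dict doc_id -> text to sample fillers from (e.g. MS MARCO v2).
    """
    rng = random.Random(seed)
    corpus = dict(core_docs)
    candidates = [d for d in pool if d not in corpus]
    n_fill = target_size - len(corpus)
    for doc_id in rng.sample(candidates, n_fill):
        corpus[doc_id] = pool[doc_id]
    return corpus

# Toy example: a 2-document core topped up to a 10-document corpus
core = {"q1_rel": "relevant text", "q1_dis": "distractor text"}
pool = {f"d{i}": f"filler {i}" for i in range(100)}
corpus_10 = build_corpus(core, pool, target_size=10)
```

Because the core set is identical at every scale, a query faces the same relevant documents and distractors whether the corpus holds 10k or 100M documents; only the amount of random filler changes.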

## 🧱 Dataset Structure (Hugging Face Format)

The dataset consists of three subsets: `queries`, `qrels`, and `corpus`.

* **queries**: contains only one split (`test`)
* **qrels**: contains two splits: `passage` and `document`
* **corpus**: contains 11 splits, detailed below:

<div style="display: flex; gap: 2em;">

<table>
<caption><strong>Passage Corpus Splits</strong></caption>
<thead><tr><th>Split</th><th># Documents</th></tr></thead>
<tbody>
<tr><td>pass_core</td><td>~7,150</td></tr>
<tr><td>pass_10k</td><td>10,000</td></tr>
<tr><td>pass_100k</td><td>100,000</td></tr>
<tr><td>pass_1M</td><td>1,000,000</td></tr>
<tr><td>pass_10M</td><td>10,000,000</td></tr>
<tr><td>pass_100M</td><td>100,000,000</td></tr>
</tbody>
</table>

<table>
<caption><strong>Document Corpus Splits</strong></caption>
<thead><tr><th>Split</th><th># Documents</th></tr></thead>
<tbody>
<tr><td>doc_core</td><td>~7,150</td></tr>
<tr><td>doc_10k</td><td>10,000</td></tr>
<tr><td>doc_100k</td><td>100,000</td></tr>
<tr><td>doc_1M</td><td>1,000,000</td></tr>
<tr><td>doc_10M</td><td>10,000,000</td></tr>
</tbody>
</table>

</div>

> Note: The `_core` splits contain only relevant and distractor documents. All other splits are topped up with randomly sampled documents to reach the target size.

## 🏷 Qrels Format

The `qrels` files in CoRE differ from those of typical IR datasets. Instead of the standard graded relevance labels (e.g., 0, 1, 2), CoRE uses two distinct labels:

* `relevant` (10 documents per query)
* `distractor` (100 documents per query)

This enables focused evaluation of model sensitivity to compression under tightly controlled relevance and distractor distributions.

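Since the labels are strings rather than graded integers, they typically need to be mapped to binary relevance before computing standard IR metrics. A minimal recall@k sketch, assuming qrels rows arrive as `(query_id, doc_id, label)` tuples (the exact schema is an assumption and may differ):

```python
def recall_at_k(qrels_rows, run, k=10):
    """qrels_rows: iterable of (query_id, doc_id, label) with label in
    {"relevant", "distractor"}; run: dict query_id -> ranked doc IDs.
    Only 'relevant' counts as a positive; distractors are hard
    negatives and contribute nothing to the score."""
    relevant = {}
    for qid, did, label in qrels_rows:
        if label == "relevant":
            relevant.setdefault(qid, set()).add(did)
    scores = []
    for qid, rel in relevant.items():
        retrieved = set(run.get(qid, [])[:k])
        scores.append(len(retrieved & rel) / len(rel))
    return sum(scores) / len(scores)

# Toy example: one of two relevant docs appears in the top 3
qrels_rows = [("q1", "d1", "relevant"), ("q1", "d2", "relevant"),
              ("q1", "d9", "distractor")]
run = {"q1": ["d1", "d9", "d3"]}
r = recall_at_k(qrels_rows, run, k=3)  # 1 of 2 relevant found -> 0.5
```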
## 📊 Evaluation

```python
from datasets import load_dataset

# Load queries
queries = load_dataset("<anonymized>/CoRE", name="queries", split="test")

# Load relevance judgments
qrels = load_dataset("<anonymized>/CoRE", name="qrels", split="passage")

# Load a 100k-scale corpus for passage retrieval
corpus = load_dataset("<anonymized>/CoRE", name="corpus", split="pass_100k")
```
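
To illustrate the kind of embedding compression CoRE is designed to evaluate, here is a minimal binary-quantization sketch with NumPy. It is a generic technique shown for context, not a method prescribed by the dataset, and the random vectors stand in for real query/document embeddings:

```python
import numpy as np

def binarize(embeddings):
    """Compress float32 embeddings to 1 bit per dimension by sign,
    packed into uint8 (a 32x reduction versus float32)."""
    bits = (embeddings > 0).astype(np.uint8)
    return np.packbits(bits, axis=-1)

def hamming_scores(query_code, doc_codes):
    """Similarity = negative Hamming distance between packed codes."""
    xor = np.bitwise_xor(query_code, doc_codes)
    return -np.unpackbits(xor, axis=-1).sum(axis=-1)

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 128)).astype(np.float32)
# A slightly perturbed copy of doc 42 acts as the "query"
query = docs[42] + 0.1 * rng.normal(size=128).astype(np.float32)
scores = hamming_scores(binarize(query[None]), binarize(docs))
best = int(np.argmax(scores))  # doc 42 should still rank first
```

Evaluating such a compressed index on CoRE's fixed relevant/distractor sets makes it visible how much ranking quality the compression sacrifices as the corpus grows.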

## 📜 Citation

If you use CoRE in your research, please cite:

```bibtex
<anonymized>
```

## 🔗 Links

* [Paper (link is anonymized)](<anonymized>)
* [MS MARCO](https://microsoft.github.io/msmarco/)
* [TREC](https://trec.nist.gov/)

## 📬 Contact

For questions or collaboration opportunities, contact `<anonymized>` at `<anonymized>`.