---

license: apache-2.0
language:
  - en
pretty_name: HSEB MSMARCO benchmarking dataset
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: int64
  - name: embedding
    sequence: float32
  - name: results_10_docs
    sequence: int64
  - name: results_10_scores
    sequence: float32
  - name: results_90_docs
    sequence: int64
  - name: results_90_scores
    sequence: float32
  - name: results_100_docs
    sequence: int64
  - name: results_100_scores
    sequence: float32
  - name: tag
    sequence: int64
configs:
  - config_name: "query-all-MiniLM-L6-v2-1K"
    data_files: "data/all-MiniLM-L6-v2/1K/queries.jsonl.gz"
    default: true
  - config_name: "corpus-all-MiniLM-L6-v2-1K"
    data_files: "data/all-MiniLM-L6-v2/1K/corpus.jsonl.gz"
  - config_name: "query-all-MiniLM-L6-v2-100K"
    data_files: "data/all-MiniLM-L6-v2/100K/queries.jsonl.gz"
  - config_name: "corpus-all-MiniLM-L6-v2-100K"
    data_files: "data/all-MiniLM-L6-v2/100K/corpus.jsonl.gz"
  - config_name: "query-all-MiniLM-L6-v2-1M"
    data_files: "data/all-MiniLM-L6-v2/1M/queries.jsonl.gz"
  - config_name: "corpus-all-MiniLM-L6-v2-1M"
    data_files: "data/all-MiniLM-L6-v2/1M/corpus.jsonl.gz"

  - config_name: "query-e5-base-v2-1K"
    data_files: "data/e5-base-v2/1K/queries.jsonl.gz"
  - config_name: "corpus-e5-base-v2-1K"
    data_files: "data/e5-base-v2/1K/corpus.jsonl.gz"
  - config_name: "query-e5-base-v2-100K"
    data_files: "data/e5-base-v2/100K/queries.jsonl.gz"
  - config_name: "corpus-e5-base-v2-100K"
    data_files: "data/e5-base-v2/100K/corpus.jsonl.gz"
  - config_name: "query-e5-base-v2-1M"
    data_files: "data/e5-base-v2/1M/queries.jsonl.gz"
  - config_name: "corpus-e5-base-v2-1M"
    data_files: "data/e5-base-v2/1M/corpus.jsonl.gz"

  - config_name: "query-Qwen3-Embedding-4B-1K"
    data_files: "data/Qwen3-Embedding-4B/1K/queries.jsonl.gz"
  - config_name: "corpus-Qwen3-Embedding-4B-1K"
    data_files: "data/Qwen3-Embedding-4B/1K/corpus.jsonl.gz"
  - config_name: "query-Qwen3-Embedding-4B-100K"
    data_files: "data/Qwen3-Embedding-4B/100K/queries.jsonl.gz"
  - config_name: "corpus-Qwen3-Embedding-4B-100K"
    data_files: "data/Qwen3-Embedding-4B/100K/corpus.jsonl.gz"
  - config_name: "query-Qwen3-Embedding-4B-1M"
    data_files: "data/Qwen3-Embedding-4B/1M/queries.jsonl.gz"
  - config_name: "corpus-Qwen3-Embedding-4B-1M"
    data_files: "data/Qwen3-Embedding-4B/1M/corpus.jsonl.gz"

---


# HSEB MSMARCO benchmarking dataset

This collection is based on the [MSMARCO](TODO) dataset:

* Embedding models: 
    * 384 dims: [sentence-transformers/all-MiniLM-L6-v2](TODO)
    * 768 dims: [intfloat/e5-base-v2](TODO)
    * 2560 dims: [Qwen3-Embedding-4B](TODO)
* Splits:
    * `1K`: 1K documents, 10K queries
    * `100K`: 100K documents, 10K queries
    * `1M`: 1M documents, 10K queries
* Filter selectivity:
    * `10%` for high selectivity, `90%` for low selectivity, and `100%` for no filter at all
    * each document has a tag sampled by selectivity: 10% of documents have the tag `10`, and 50% have the tag `50`
* Exact match results:
    * for each selectivity level and each query, exact k-NN search results for the top-100 documents are precomputed.
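
The precomputed results can presumably be reproduced with an exact, filtered brute-force search over the corpus embeddings. Below is a minimal sketch with synthetic vectors; the similarity metric (inner product here) and the tag-filter semantics (`tag <= 10` for 10% selectivity) are assumptions, not stated in this card:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real corpus/query embeddings.
corpus_emb = rng.normal(size=(1000, 8)).astype(np.float32)
corpus_tag = rng.integers(1, 101, size=1000)  # hypothetical per-doc tag
query_emb = rng.normal(size=(8,)).astype(np.float32)

def exact_knn(query, embeddings, mask, k=100):
    """Exact top-k by inner product over the documents selected by mask."""
    idx = np.flatnonzero(mask)
    scores = embeddings[idx] @ query
    order = np.argsort(-scores)[:k]
    return idx[order], scores[order]

# 10% selectivity: restrict the search to documents passing the filter.
docs, scores = exact_knn(query_emb, corpus_emb, corpus_tag <= 10)
```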


## Dataset structure

The dataset can be loaded with the [Hugging Face datasets](TODO) library:

```python
from datasets import load_dataset

query = load_dataset("hseb-benchmark/msmarco", "query-all-MiniLM-L6-v2-1M")
corpus = load_dataset("hseb-benchmark/msmarco", "corpus-all-MiniLM-L6-v2-1M")
```

where the second argument to `load_dataset` is a config name composed of the following parts:

```
<query|corpus>-<model>-<size>
```

1. `query` or `corpus`: which side of the dataset you're loading
2. `model`: embedding model, `all-MiniLM-L6-v2` (384 dims), `e5-base-v2` (768 dims) or `Qwen3-Embedding-4B` (2560 dims)
3. `size`: sample size, `1K`, `100K`, `1M`
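
Each query record carries the fields declared in the metadata above (`embedding`, `results_*_docs`, `results_*_scores`, `tag`), so scoring an ANN index against the exact results reduces to a set-overlap computation. A minimal recall@k sketch on a made-up record (all values hypothetical):

```python
# A hypothetical query record shaped like the dataset's feature list.
record = {
    "id": 42,
    "embedding": [0.1, 0.2, 0.3],
    "results_10_docs": [7, 3, 99, 12, 5],
    "results_10_scores": [0.91, 0.88, 0.80, 0.77, 0.75],
    "tag": [10],
}

def recall_at_k(retrieved, exact, k):
    """Fraction of the exact top-k that the ANN search also returned."""
    return len(set(retrieved[:k]) & set(exact[:k])) / k

# Suppose an ANN index returned these doc ids for the same filtered query:
ann_results = [7, 99, 4, 3, 8]
print(recall_at_k(ann_results, record["results_10_docs"], k=5))  # -> 0.6
```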

## License

Apache 2.0