# LIMIT-small-dense

LIMIT-small-dense is a self-produced dataset created to reproduce the paper *On the Theoretical Limitations of Embedding-Based Retrieval*. The reproduction codebase is available at https://github.com/gabor-hosu/embedding_dimension_limit.

The dataset reuses the name and attribute distribution of the original LIMIT dataset and follows the same underlying construction principles described in the paper. Due to hardware constraints, the dataset was scaled down compared to the original experimental setup: the released version contains 2,000 documents and 23 queries.

The complete `qrels` split is shown below. Each query has exactly two relevant documents, and each query's pair of relevant documents is unique:

| query-id (string) | corpus-id (string) | score (int64) |
|---|---|---|
| query_0 | Myrle Drachman | 1 |
| query_0 | Darla Cassavant | 1 |
| query_1 | Myrle Drachman | 1 |
| query_1 | Delbert Lavache | 1 |
| query_2 | Myrle Drachman | 1 |
| query_2 | Kareen Panak | 1 |
| query_3 | Myrle Drachman | 1 |
| query_3 | Cherelle Dipesa | 1 |
| query_4 | Myrle Drachman | 1 |
| query_4 | Luka Nicora | 1 |
| query_5 | Myrle Drachman | 1 |
| query_5 | Tricia Bengtson | 1 |
| query_6 | Myrle Drachman | 1 |
| query_6 | Herminia Klukowski | 1 |
| query_7 | Darla Cassavant | 1 |
| query_7 | Delbert Lavache | 1 |
| query_8 | Darla Cassavant | 1 |
| query_8 | Kareen Panak | 1 |
| query_9 | Darla Cassavant | 1 |
| query_9 | Cherelle Dipesa | 1 |
| query_10 | Darla Cassavant | 1 |
| query_10 | Luka Nicora | 1 |
| query_11 | Darla Cassavant | 1 |
| query_11 | Tricia Bengtson | 1 |
| query_12 | Darla Cassavant | 1 |
| query_12 | Herminia Klukowski | 1 |
| query_13 | Delbert Lavache | 1 |
| query_13 | Kareen Panak | 1 |
| query_14 | Delbert Lavache | 1 |
| query_14 | Cherelle Dipesa | 1 |
| query_15 | Delbert Lavache | 1 |
| query_15 | Luka Nicora | 1 |
| query_16 | Delbert Lavache | 1 |
| query_16 | Tricia Bengtson | 1 |
| query_17 | Delbert Lavache | 1 |
| query_17 | Herminia Klukowski | 1 |
| query_18 | Kareen Panak | 1 |
| query_18 | Cherelle Dipesa | 1 |
| query_19 | Kareen Panak | 1 |
| query_19 | Luka Nicora | 1 |
| query_20 | Kareen Panak | 1 |
| query_20 | Tricia Bengtson | 1 |
| query_21 | Kareen Panak | 1 |
| query_21 | Herminia Klukowski | 1 |
| query_22 | Cherelle Dipesa | 1 |
| query_22 | Luka Nicora | 1 |
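The released files (`corpus.jsonl`, `queries.jsonl`, `qrels.jsonl`) can be read with the `datasets` library; a minimal loading sketch, assuming the three files have been downloaded from this repository into the working directory:

```python
# Minimal loading sketch, assuming the three JSONL files from this
# repository are available locally.
from datasets import load_dataset

data = load_dataset(
    "json",
    data_files={
        "corpus": "corpus.jsonl",
        "queries": "queries.jsonl",
        "qrels": "qrels.jsonl",
    },
)
print(data["corpus"][0])  # {'_id': ..., 'title': '', 'text': '... likes ...'}
```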
## Dataset Generation

The dataset was generated using the following procedure. First, a dense binary relevance matrix is constructed, defining which documents are relevant to which queries:
```python
import numpy as np
from itertools import combinations


def dense_matrix(num_of_queries: int, num_of_docs: int, k: int = 2) -> np.ndarray:
    # Each query row marks a unique combination of k relevant documents,
    # so no two queries share the same set of relevant documents.
    all_indexes = np.arange(num_of_docs)
    A = np.zeros((num_of_queries, num_of_docs), dtype=bool)
    for row, combo in zip(range(num_of_queries), combinations(all_indexes, k)):
        A[row, combo] = True
    return A
```
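For intuition, a small instance makes the combination structure visible. The released dataset uses `dense_matrix(23, 8)`, i.e. the first 23 of the C(8, 2) = 28 document pairs; this tiny illustrative example is not part of the generation script:

```python
# dense_matrix(num_of_queries=3, num_of_docs=4) assigns the first three
# 2-element combinations of {0, 1, 2, 3} to the three queries:
#
# [[ True  True False False]   -> query 0 is answered by docs 0 and 1
#  [ True False  True False]   -> query 1 is answered by docs 0 and 2
#  [ True False False  True]]  -> query 2 is answered by docs 0 and 3
print(dense_matrix(num_of_queries=3, num_of_docs=4))
```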
Next, the binary relevance structure is converted into a natural-language corpus, queries, and relevance judgments following the MTEB format:
```python
import pandas as pd
import random


def generate_dataset(
    liked_items: list[str],
    names: list[str],
    qrel_matrix: np.ndarray,
    items_per_person: int = 20,
    total_num_of_docs: int = 2000,
    seed: int = 42,
):
    num_of_queries, num_of_docs = qrel_matrix.shape
    random.seed(seed)

    # one liked item per query; the unused items become filler attributes
    query_items = random.sample(liked_items, num_of_queries)
    remaining_items = list(set(liked_items) - set(query_items))
    doc_ids = np.array(random.sample(names, num_of_docs))
    remaining_doc_ids = list(set(names) - set(doc_ids))

    docs = {}
    qrels_data = []

    # fill up the binary qrel structure with natural language
    for query_idx, (mask, item) in enumerate(zip(qrel_matrix, query_items)):
        selected_doc_ids = doc_ids[mask]
        for doc_id in selected_doc_ids:
            doc = docs.get(doc_id)
            if doc is None:
                docs[doc_id] = []
                doc = docs[doc_id]
            doc.append(item)
            qrels_data.append({
                "query-id": f"query_{query_idx}",
                "corpus-id": doc_id,
                "score": 1
            })

    # add remaining items to the docs
    for doc_id in docs:
        num_new_items_per_docs = items_per_person - len(docs[doc_id])
        new_items = random.sample(remaining_items, num_new_items_per_docs)
        docs[doc_id].extend(new_items)

    # pad the corpus with distractor documents up to the target size
    num_new_docs = total_num_of_docs - len(docs)
    if num_new_docs > 0:
        new_doc_ids = random.sample(remaining_doc_ids, num_new_docs)
        docs |= {
            doc_id: random.sample(remaining_items, items_per_person)
            for doc_id in new_doc_ids
        }

    # build and return the proper MTEB format
    corpus = pd.DataFrame(
        [
            {
                "_id": doc_id,
                "title": "",
                "text": f"{doc_id} likes {', '.join(random.sample(docs[doc_id], len(docs[doc_id])))}."
            }
            for doc_id in docs
        ]
    )
    queries = pd.DataFrame(
        [
            {
                "_id": f"query_{query_idx}",
                "text": f"Who likes {item}?"
            }
            for query_idx, item in enumerate(query_items)
        ]
    )
    qrels = pd.DataFrame(qrels_data)
    return corpus, queries, qrels
```
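The generator can be smoke-tested on tiny inputs before running it at full scale. The item and name lists below are made-up placeholders for illustration; the released dataset uses the LIMIT name/attribute lists instead:

```python
# Hypothetical inputs for a quick smoke test only.
toy_items = [f"item_{i}" for i in range(60)]
toy_names = [f"person_{i}" for i in range(40)]

toy_A = dense_matrix(num_of_queries=3, num_of_docs=4)
toy_corpus, toy_queries, toy_qrels = generate_dataset(
    liked_items=toy_items,
    names=toy_names,
    qrel_matrix=toy_A,
    items_per_person=5,
    total_num_of_docs=10,
)
print(toy_queries["text"].iloc[0])  # e.g. "Who likes item_17?"
print(toy_corpus["text"].iloc[0])   # e.g. "person_3 likes item_9, item_42, ..."
```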
Finally, the relevance matrix is instantiated for 23 queries over 8 target documents, and the three files are written to disk:

```python
# liked_items and names are the attribute and name lists taken from the
# original LIMIT dataset (see the repository linked above).
A = dense_matrix(num_of_queries=23, num_of_docs=8)
corpus, queries, qrels = generate_dataset(
    liked_items=liked_items,
    names=names,
    qrel_matrix=A,
)
corpus.to_json("corpus.jsonl", orient="records", lines=True)
queries.to_json("queries.jsonl", orient="records", lines=True)
qrels.to_json("qrels.jsonl", orient="records", lines=True)
```
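Several invariants follow directly from the construction and can be verified on the written files; a small sanity-check sketch:

```python
# Sanity checks implied by the construction above.
import pandas as pd

corpus = pd.read_json("corpus.jsonl", lines=True)
queries = pd.read_json("queries.jsonl", lines=True)
qrels = pd.read_json("qrels.jsonl", lines=True)

assert len(corpus) == 2000                             # total_num_of_docs
assert len(queries) == 23                              # num_of_queries
assert (qrels.groupby("query-id").size() == 2).all()   # k = 2 relevant docs per query
assert qrels["corpus-id"].nunique() == 8               # num_of_docs target documents
```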
The resulting dataset consists of a synthetic natural-language corpus, corresponding queries, and dense relevance judgments designed to stress-test embedding-based retrieval under constrained dimensionality.
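An embedding model can be scored against the dataset in a few lines; a minimal sketch, assuming the `sentence-transformers` package is installed (the model name is an arbitrary illustrative choice, not the one used in the reproduction):

```python
# Minimal recall@2 evaluation sketch; model choice is illustrative only.
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer

corpus = pd.read_json("corpus.jsonl", lines=True)
queries = pd.read_json("queries.jsonl", lines=True)
qrels = pd.read_json("qrels.jsonl", lines=True)

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(corpus["text"].tolist(), normalize_embeddings=True)
query_emb = model.encode(queries["text"].tolist(), normalize_embeddings=True)

scores = query_emb @ doc_emb.T  # cosine similarity per (query, doc) pair

hits, total = 0, 0
for q_idx, q_id in enumerate(queries["_id"]):
    relevant = set(qrels.loc[qrels["query-id"] == q_id, "corpus-id"])
    top2 = set(corpus["_id"].iloc[np.argsort(-scores[q_idx])[:2]])
    hits += len(relevant & top2)
    total += len(relevant)

print(f"recall@2: {hits / total:.3f}")
```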