---
license: apache-2.0
task_categories:
- text-retrieval
- feature-extraction
language:
- en
tags:
- search
- benchmark
- information-retrieval
- full-text-search
size_categories:
- 1M<n<10M
configs:
- config_name: corpus
  data_files:
  - split: train
    path: corpus.parquet
- config_name: queries
  data_files:
  - split: train
    path: queries.parquet
dataset_info:
- config_name: corpus
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_examples: 5032104
- config_name: queries
  features:
  - name: query
    dtype: string
  - name: tags
    sequence: string
  splits:
  - name: train
    num_examples: 903
---
# Search Benchmark Dataset

A benchmark dataset for evaluating full-text search engines, derived from the search-benchmark-game project.

## Dataset Description

This dataset contains a corpus of Wikipedia articles and a set of search queries designed to benchmark different search engine implementations.
### Corpus

- Size: 5,032,104 documents
- Source: English Wikipedia
- Fields:
  - `id`: Wikipedia article URL (e.g., `https://en.wikipedia.org/wiki?curid=48687903`)
  - `text`: Article content (lowercase, cleaned text)
### Queries

- Size: 903 queries
- Source: Derived from the AOL query dataset (filtered, no personal information)
- Fields:
  - `query`: Search query string
  - `tags`: List of query characteristics
### Query Types

| Type | Syntax | Example | Description |
|---|---|---|---|
| term | `word` | `the` | Single term query |
| phrase | `"word1 word2"` | `"griffith observatory"` | Exact phrase match (words must appear consecutively) |
| intersection | `+word1 +word2` | `+griffith +observatory` | AND query (all terms must appear, position-independent) |
| union | `word1 word2` | `griffith observatory` | OR query (any term can appear) |
### Query Tags

| Tag | Description |
|---|---|
| `term` | Single term query |
| `phrase` | Phrase query requiring consecutive word positions |
| `intersection` | Boolean AND query |
| `union` | Boolean OR query |
| `global` | Position-independent matching (used with intersection/union, as opposed to phrase, which requires consecutive positions) |
| `num_tokens_N` | Query contains N tokens |
| `intersection:num_tokens_N` | Intersection query with N tokens |
| `phrase:num_tokens_N` | Phrase query with N tokens |
| `union:num_tokens_N` | Union query with N tokens |
| `union:num_tokens_>3` | Union query with more than 3 tokens |
| `two-phase-critical` | Queries requiring two-phase execution |
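The syntax rules in the Query Types table above can be summarized as a small classifier. The helper below is purely illustrative (it is not shipped with the dataset) and assumes the four syntaxes listed in the table:

```python
def classify_query(query: str) -> str:
    """Classify a benchmark query string by its syntax.

    Follows the Query Types table: quoted strings are phrases,
    all-`+`-prefixed multi-token queries are intersections, a single
    bare token is a term, and anything else is a union.
    """
    tokens = query.split()
    if query.startswith('"') and query.endswith('"') and len(query) > 1:
        return "phrase"
    if len(tokens) > 1 and all(t.startswith("+") for t in tokens):
        return "intersection"
    if len(tokens) == 1:
        return "term"
    return "union"
```

For example, `classify_query('+griffith +observatory')` yields `"intersection"`, while the unprefixed `classify_query('griffith observatory')` yields `"union"`.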
## Usage

```python
from datasets import load_dataset

# Load corpus
corpus = load_dataset("WenxingZhu/search-benchmark-dataset", "corpus", split="train")
print(f"Corpus size: {len(corpus)}")
print(corpus[0])

# Load queries
queries = load_dataset("WenxingZhu/search-benchmark-dataset", "queries", split="train")
print(f"Number of queries: {len(queries)}")
print(queries[0])
```
## Dataset Statistics

| Config | Rows | Fields |
|---|---|---|
| corpus | 5,032,104 | id, text |
| queries | 903 | query, tags |
## Benchmark Details
The corpus is the English Wikipedia with stemming disabled. Queries have been filtered from the AOL query dataset to include only those with at least two terms that yield at least one hit as a phrase query.
Query types tested include:
- Intersection queries: Multiple terms with AND semantics (all terms must appear)
- Union queries: Multiple terms with OR semantics (any term can appear)
- Phrase queries: Exact phrase matching (terms must appear consecutively)
Collection options:

- COUNT: Only count matching documents
- TOP 10: Retrieve the 10 documents with the best BM25 scores
- TOP 10 + COUNT: Top 10 documents plus the total match count
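The three collection options can be sketched over a stream of already-scored hits. This is a minimal illustration, not the benchmark harness: in the real benchmark each engine computes BM25 scores internally, so the `(doc_id, score)` pairs here are assumed inputs.

```python
import heapq

def collect(scored_docs, mode="TOP 10 + COUNT"):
    """Sketch of the COUNT / TOP 10 / TOP 10 + COUNT collection modes.

    `scored_docs` is an iterable of (doc_id, score) pairs, standing in
    for an engine's scored postings.
    """
    if mode == "COUNT":
        # Only count matches; no scores or document ids are kept.
        return {"count": sum(1 for _ in scored_docs)}
    top, count = [], 0
    for doc_id, score in scored_docs:
        count += 1
        # Min-heap of size 10 keeps the 10 best-scoring documents.
        if len(top) < 10:
            heapq.heappush(top, (score, doc_id))
        elif score > top[0][0]:
            heapq.heapreplace(top, (score, doc_id))
    result = {"top10": sorted(top, reverse=True)}
    if mode == "TOP 10 + COUNT":
        result["count"] = count
    return result
```

The heap-of-10 pattern is why TOP 10 can be cheaper than TOP 10 + COUNT: with counting, every match must be visited, whereas a top-k-only collector may let an engine skip low-scoring blocks entirely.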
## Source
This dataset is derived from the search-benchmark-game project by Quickwit, which benchmarks various search engines including Tantivy, Lucene, PISA, and others.
## License
Apache 2.0