---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: docid
    dtype: string
  - name: text
    dtype: string
  - name: url
    dtype: string
  splits:
  - name: train
    num_bytes: 48560880327
    num_examples: 14878084
  download_size: 29752310440
  dataset_size: 48560880327
---
🤗 HuggingFace | Blog | Slack | WeChat
# OpenResearcher Corpus
This dataset contains a carefully curated corpus of roughly 11B tokens that serves as an offline search engine for our data generation process, eliminating the need for external search APIs. Details on the corpus curation process are available in our blog.
## Format
Each row in the dataset contains the following fields (a snippet for inspecting a single row follows the list):
- docid (string): A unique identifier for each document in the corpus.
- text (string): The complete text content of the document. Contains the full body of web pages.
- url (string): The source URL where the document was retrieved from.
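
A minimal sketch for inspecting a single row without downloading the full ~30 GB of parquet shards, assuming streaming access via the `datasets` library is acceptable for your setup:

```python
from datasets import load_dataset

# Stream the corpus so one row can be inspected without a full download
corpus_stream = load_dataset(
    "OpenResearcher/OpenResearcher-Corpus", split="train", streaming=True
)

row = next(iter(corpus_stream))
print(row["docid"])       # unique document identifier
print(row["url"])         # source URL of the page
print(row["text"][:200])  # first 200 characters of the document body
```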
## How to use this dataset?
You can use this dataset together with its precomputed embeddings to build an offline search engine. Below is pseudo-code for demonstration only; for production use, consider faiss-gpu (a short GPU sketch follows the example).
```bash
# Download the embedding index first
huggingface-cli download OpenResearcher/OpenResearcher-Corpus --repo-type=dataset --include="qwen3-embedding-8b/*" --local-dir ./indexes
```
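
If you prefer to stay in Python, the same index shards can be fetched with `huggingface_hub`; this is a sketch assuming the default behavior of `snapshot_download` is fine for your environment:

```python
from huggingface_hub import snapshot_download

# Fetch only the Qwen3-Embedding-8B index shards from the dataset repo
snapshot_download(
    repo_id="OpenResearcher/OpenResearcher-Corpus",
    repo_type="dataset",
    allow_patterns="qwen3-embedding-8b/*",
    local_dir="./indexes",
)
```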
```python
import glob
import pickle

import faiss
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# 1. Load the corpus (~48 GB) and build a docid -> document lookup
corpus = load_dataset("OpenResearcher/OpenResearcher-Corpus", split="train")
docid_to_doc = {str(doc["docid"]): doc for doc in corpus}

# 2. Load all embedding shards from the downloaded index directory
index_files = sorted(glob.glob("./indexes/qwen3-embedding-8b/*.pkl"))
all_embeddings = []
all_lookup = []
for file_path in index_files:
    with open(file_path, "rb") as f:
        embeddings, lookup = pickle.load(f)
    all_embeddings.append(embeddings)
    all_lookup.extend(lookup)
all_embeddings = np.vstack(all_embeddings).astype(np.float32)
faiss.normalize_L2(all_embeddings)  # normalize so inner product = cosine similarity

# 3. Build a flat inner-product FAISS index
index = faiss.IndexFlatIP(all_embeddings.shape[1])
index.add(all_embeddings)

# 4. Load the embedding model and encode the query
model = SentenceTransformer("Qwen/Qwen3-Embedding-8B")
query = "What is machine learning?"
query_embedding = model.encode([query], prompt_name="query").astype(np.float32)
faiss.normalize_L2(query_embedding)  # the query must be normalized as well

# 5. Search in FAISS
scores, indices = index.search(query_embedding, k=5)

# 6. Print the top-5 results
for idx, score in zip(indices[0], scores[0]):
    docid = str(all_lookup[idx])
    doc = docid_to_doc.get(docid)
    if doc:
        print(f"Score: {score:.4f}")
        print(f"URL: {doc['url']}")
        print(f"Text: {doc['text'][:200]}...\n")
```
## Citation

```bibtex
@misc{li2025openresearcher,
  title={OpenResearcher: A Fully Open Pipeline for Long-Horizon Deep Research Trajectory Synthesis},
  author={Zhuofeng Li and Dongfu Jiang and Xueguang Ma and Haoxiang Zhang and Ping Nie and Yuyu Zhang and Kai Zou and Jianwen Xie and Yu Zhang and Wenhu Chen},
  year={2025},
  howpublished={\url{https://www.notion.so/OpenResearcher-A-Fully-Open-Pipeline-for-Long-Horizon-Deep-Research-Trajectory-Synthesis-2f7e290627b5800cb3a0cd7e8d6ec0ea}},
  note={Notion Blog}
}
```