ZhuofengLi committed (verified) · Commit fe24222 · Parent: 642f9ec

Update README.md (1 file changed: README.md, +6 -11)
**OpenResearcher** is a fully open agentic large language model (30B-A3B) designed for **long-horizon deep research** scenarios. It achieves an impressive **54.8%** accuracy on [BrowseComp-Plus](https://huggingface.co/spaces/Tevatron/BrowseComp-Plus), surpassing the performance of `GPT-4.1`, `Claude-Opus-4`, `Gemini-2.5-Pro`, `DeepSeek-R1`, and `Tongyi-DeepResearch`. It also demonstrates **leading performance** across a range of deep research benchmarks, including BrowseComp, GAIA, WebWalkerQA, and xbench-DeepSearch. We **fully open-source** the training and evaluation recipe, including the data, model, training methodology, and evaluation framework, so that everyone can advance deep research.

## OpenResearcher Corpus

This dataset contains a carefully curated ~11B-token corpus that serves as an offline search engine for our data-generation process, eliminating the need for external search APIs. Details on the corpus curation process are available in our [blog](https://boiled-honeycup-4c7.notion.site/OpenResearcher-A-Fully-Open-Pipeline-for-Long-Horizon-Deep-Research-Trajectory-Synthesis-2f7e290627b5800cb3a0cd7e8d6ec0ea?source=copy_link).

## Format

Each row in the dataset contains the following fields:

- **docid** (string): A unique identifier for each document in the corpus.
- **text** (string): The complete text content of the document, i.e., the full body of the web page.
- **url** (string): The source URL from which the document was retrieved.

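Since the corpus is intended to act as an offline search engine, rows in this schema can back a simple retrieval index. The sketch below (standard library only, with illustrative rows rather than real corpus data; it is not OpenResearcher's actual retrieval stack) shows a minimal inverted-index keyword search over `docid`/`text`/`url` rows:

```python
import re
from collections import defaultdict

# Illustrative rows in the corpus schema (docid, text, url); not real data.
rows = [
    {"docid": "d1", "text": "Deep research agents browse and reason over the web.",
     "url": "https://example.com/a"},
    {"docid": "d2", "text": "An offline corpus can replace external search APIs.",
     "url": "https://example.com/b"},
]

# Build a tiny inverted index: token -> set of docids containing it.
index = defaultdict(set)
docs = {}
for row in rows:
    docs[row["docid"]] = row
    for token in re.findall(r"[a-z0-9]+", row["text"].lower()):
        index[token].add(row["docid"])

def search(query: str):
    """Return rows whose text contains every query token (simple AND search)."""
    tokens = re.findall(r"[a-z0-9]+", query.lower())
    if not tokens:
        return []
    hits = set.intersection(*(index.get(t, set()) for t in tokens))
    return [docs[d] for d in sorted(hits)]

print([r["docid"] for r in search("offline search")])  # -> ['d2']
```

A production setup would use a proper ranker (e.g., BM25) over the full ~11B-token corpus, but the row schema stays the same.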
## How to use this dataset?

You can use this dataset with the Hugging Face `datasets` library.
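A minimal sketch of working with the corpus: the `load_dataset` call in the comments is the standard Hugging Face idiom (the repo id is a placeholder, since the Hub id is not stated in this section), and the runnable part below mimics one streamed row with the standard library only:

```python
# Standard Hugging Face loading idiom (requires the `datasets` package);
# streaming avoids downloading the full ~11B-token corpus up front:
#
#   from datasets import load_dataset
#   corpus = load_dataset("<repo-id>", split="train", streaming=True)  # placeholder id
#
# Each streamed row is a plain dict with the schema described above.
# Here we mimic one such row so the snippet runs without network access.
sample_row = {
    "docid": "doc-0000001",                       # unique document id (illustrative)
    "text": "OpenResearcher is a fully open agentic LLM for deep research.",
    "url": "https://example.com/openresearcher",  # source URL (illustrative)
}

def preview(row: dict, n: int = 60) -> str:
    """Return a short one-line preview of a corpus row."""
    return f'{row["docid"]} | {row["url"]} | {row["text"][:n]}'

print(preview(sample_row))
```

Iterating over the streamed dataset and calling a helper like `preview` on each row is a quick way to sanity-check the `docid`/`text`/`url` fields before indexing.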