|
|
--- |
|
|
base_model: |
|
|
- Qwen/Qwen2.5-32B-Instruct |
|
|
license: mit |
|
|
pipeline_tag: text-generation |
|
|
library_name: transformers |
|
|
--- |
|
|
|
|
|
# MedResearcher-R1: Expert-Level Medical Deep Researcher via A Knowledge-Informed Trajectory Synthesis Framework |
|
|
|
|
|
[arXiv](https://arxiv.org/abs/2508.14880) | [GitHub](https://github.com/AQ-MedAI/MedResearcher-R1) | [License](https://github.com/AQ-MedAI/MedResearcher-R1/blob/main/LICENSE)
|
|
|
|
|
### Author List |
|
|
Ailing Yu, Lan Yao, Jingnan Liu, Zhe Chen, Jiajun Yin, Yuan Wang, Xinhao Liao, Zhiling Ye, Ji Li, Yun Yue, Hansong Xiao, Hualei Zhou, Chunxiao Guo, Peng Wei, Jinjie Gu |
|
|
|
|
|
### Abstract |
|
|
Recent developments in Large Language Model (LLM)-based agents have shown impressive capabilities spanning multiple domains, exemplified by deep research systems that demonstrate superior performance on complex information-seeking and synthesis tasks. While general-purpose deep research agents have shown impressive capabilities, they struggle significantly with medical domain challenges: the MedBrowseComp benchmark reveals that even GPT-o3 deep research, the leading proprietary deep research system, achieves only 25.5% accuracy on complex medical queries. The key limitations are: (1) insufficient dense medical knowledge for clinical reasoning, and (2) lack of medical-specific retrieval tools. We present a medical deep research agent that addresses these challenges through two core innovations. First, we develop a novel data synthesis framework using medical knowledge graphs, extracting the longest chains from subgraphs around rare medical entities to generate complex multi-hop QA pairs. Second, we integrate a custom-built private medical retrieval engine alongside general-purpose tools, enabling accurate medical information synthesis. Our approach generates 2,100 diverse trajectories across 12 medical specialties, each averaging 4.2 tool interactions. Through a two-stage training paradigm combining supervised fine-tuning and online reinforcement learning with composite rewards, our open-source 32B model achieves competitive performance on general benchmarks (GAIA: 53.4, xBench: 54), comparable to GPT-4o-mini, while outperforming significantly larger proprietary models. More importantly, we establish a new state-of-the-art on MedBrowseComp with 27.5% accuracy, surpassing leading closed-source deep research systems including GPT-o3 deep research, substantially advancing medical deep research capabilities. Our work demonstrates that strategic domain-specific innovations in architecture, tool design, and training data construction can enable smaller open-source models to outperform much larger proprietary systems in specialized domains. Code and datasets will be released to facilitate further research.
|
|
|
|
|
<div align="center"> |
|
|
<img src="https://github.com/AQ-MedAI/MedResearcher-R1/raw/main/assets/logo.png" alt="logo" width="300"/> |
|
|
</div> |
|
|
|
|
|
**MedResearcher-R1** is a comprehensive **training data generation and synthesis framework** that tackles the challenge of domain-specific AI reasoning through **knowledge-informed trajectory synthesis**. Our framework provides an end-to-end solution for generating high-quality training data, consisting of three integrated components: |
|
|
|
|
|
**🧠 Knowledge Graph Construction**: Our core innovation - an intelligent knowledge graph construction and QA synthesis system that transforms domain knowledge into high-quality question-answer pairs with automated reasoning path generation. This module serves as the foundation for creating domain-specific training data. |
|
|
|
|
|
<div align="center"> |
|
|
<img src="https://github.com/AQ-MedAI/MedResearcher-R1/raw/main/assets/qa_generation_system.png" alt="Knowledge Graph Construction Diagram"/> |
|
|
</div> |
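To make the idea concrete, here is a minimal, illustrative sketch of the chain-extraction step described above: sample the local subgraph around a rare seed entity, take the longest simple relation chain, and turn it into one multi-hop QA pair. This is not the repository's implementation; the `networkx` usage, function names, and the toy medical entities are all our own assumptions.

```python
# Illustrative sketch only: the actual KnowledgeGraphConstruction module is more
# elaborate (five sampling strategies, concept obfuscation, cheat_sheet generation).
# Entity and relation names below are invented for demonstration.
import networkx as nx

def longest_chain_from(graph: nx.DiGraph, seed: str, radius: int = 2) -> list[str]:
    """Find the longest simple path starting at a rare seed entity
    within its local subgraph."""
    sub = nx.ego_graph(graph, seed, radius=radius)
    best: list[str] = [seed]
    for target in sub.nodes:
        if target == seed:
            continue
        for path in nx.all_simple_paths(sub, seed, target):
            if len(path) > len(best):
                best = path
    return best

def chain_to_multihop_qa(graph: nx.DiGraph, chain: list[str]) -> dict:
    """Turn a relation chain into one multi-hop QA pair: the question walks
    the chain hop by hop and the final entity is the answer."""
    hops = [
        f"the entity related to '{a}' via '{graph.edges[a, b]['relation']}'"
        for a, b in zip(chain, chain[1:])
    ]
    question = "Starting from " + chain[0] + ", what is " + ", then ".join(hops) + "?"
    return {"question": question, "answer": chain[-1], "hops": len(chain) - 1}

# Toy medical-style graph (made-up facts, for shape only).
g = nx.DiGraph()
g.add_edge("RareDiseaseX", "GeneY", relation="associated_gene")
g.add_edge("GeneY", "DrugZ", relation="targeted_by")

chain = longest_chain_from(g, "RareDiseaseX")
print(chain_to_multihop_qa(g, chain))
```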
|
|
|
|
|
**🔄 Trajectory Generation Pipeline**: End-to-end trajectory synthesis and optimization system that converts QA pairs into multi-turn reasoning trajectories with tool interactions and quality filtering for model training. |
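As a rough illustration of what a synthesized trajectory might look like once QA pairs are rolled out with tools, the sketch below defines a hypothetical record with interleaved reasoning text and tool calls. The field names and the `medical_search` tool name are assumptions, not the pipeline's actual schema.

```python
# Hypothetical schema for a single synthesized trajectory; field names are our own
# assumptions, not the repository's exact on-disk format.
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str        # e.g. a medical retrieval tool or a general web-search tool
    arguments: dict  # JSON-serializable tool arguments
    response: str    # observation returned to the agent

@dataclass
class Trajectory:
    question: str
    answer: str
    turns: list = field(default_factory=list)  # alternating reasoning text / ToolCall

example = Trajectory(
    question="Starting from RareDiseaseX, which drug targets its associated gene?",
    answer="DrugZ",
    turns=[
        "I should first find the gene associated with RareDiseaseX.",
        ToolCall("medical_search", {"query": "RareDiseaseX associated gene"}, "GeneY"),
        "Next, find which drug targets GeneY.",
        ToolCall("medical_search", {"query": "drugs targeting GeneY"}, "DrugZ"),
        "Final answer: DrugZ.",
    ],
)
print(len([t for t in example.turns if isinstance(t, ToolCall)]), "tool interactions")
```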
|
|
|
|
|
**📊 Evaluation Pipeline**: Comprehensive model evaluation and validation framework for assessing reasoning performance across multiple benchmarks and validating the quality of synthesized training data. |
|
|
|
|
|
These three components form a complete **training data production pipeline** from knowledge extraction to model training data generation and evaluation, enabling the creation of specialized reasoning models for domain-specific applications. |
|
|
|
|
|
## Features |
|
|
- **Knowledge Graph Construction** |
|
|
- **Interface Support**: Interactive web visualization with D3.js force-directed graphs |
|
|
- **Advanced Sampling Algorithms**: 5 sophisticated strategies (mixed, augmented_chain, community_core_path, dual_core_bridge, max_chain) for complex subgraph extraction |
|
|
- **Unified QA Generation**: Deep concept obfuscation with quantitative reasoning and multi-paradigm question synthesis |
|
|
- **Reasoning Path Generation**: Automated cheat_sheet creation with detailed step-by-step reasoning guidance for complex multi-hop questions |
|
|
- **Batch Processing System**: Concurrent QA generation with intelligent QPS control, progress monitoring, and resume capability |
|
|
|
|
|
- **Trajectory Generation Pipeline** |
|
|
- **Agent Framework**: Multi-turn reasoning with tool integration and concurrent task processing |
|
|
- **Advanced Quality Filtering**: Token-based validation, tool call/response matching, and automated error detection (a minimal sketch of these checks follows this feature list)
|
|
- **Intelligent Rewriting System**: LLM-powered trajectory optimization with Masked Trajectory Guidance (MTG) |
|
|
|
|
|
- **Evaluation Pipeline** |
|
|
- **Interactive Question Reasoning**: Single question mode with detailed step-by-step process visualization |
|
|
- **Batch Dataset Evaluation**: Multi-worker parallel processing with configurable rollouts and timeout controls |
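The **Advanced Quality Filtering** item above names three checks: token-based validation, tool call/response matching, and error detection. Below is a minimal sketch of how such filters could be applied to a trajectory record; the threshold, field names, and heuristics are our own assumptions rather than the pipeline's actual configuration.

```python
# Illustrative filters in the spirit of the pipeline's quality checks; the token
# budget and record layout are assumed for demonstration.

MAX_TOKENS = 8192  # assumed per-trajectory token budget

def rough_token_count(text: str) -> int:
    # Crude whitespace proxy; a real pipeline would use the model tokenizer.
    return len(text.split())

def passes_quality_filters(traj: dict) -> bool:
    """traj = {"turns": [{"role": "assistant"|"tool", "content": str, "tool_call": bool}, ...]}"""
    turns = traj["turns"]
    # 1. Token-budget validation: drop trajectories that would overflow the context.
    if sum(rough_token_count(t["content"]) for t in turns) > MAX_TOKENS:
        return False
    # 2. Tool call/response matching: every assistant tool call must be followed
    #    by a non-empty tool response.
    for i, t in enumerate(turns):
        if t["role"] == "assistant" and t.get("tool_call"):
            if i + 1 >= len(turns) or turns[i + 1]["role"] != "tool" or not turns[i + 1]["content"].strip():
                return False
    # 3. Simple error detection: reject trajectories whose tool responses are error strings.
    if any(t["role"] == "tool" and t["content"].lstrip().lower().startswith("error") for t in turns):
        return False
    return True

demo = {"turns": [
    {"role": "assistant", "content": "Search for the associated gene.", "tool_call": True},
    {"role": "tool", "content": "GeneY", "tool_call": False},
    {"role": "assistant", "content": "Final answer: GeneY.", "tool_call": False},
]}
print(passes_quality_filters(demo))  # True
```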
|
|
|
|
|
## Performance Highlights |
|
|
|
|
|
Using our knowledge-informed trajectory synthesis framework, we developed **MedResearcher-R1**, a specialized reasoning model that demonstrates exceptional performance across multiple challenging benchmarks including MedBrowseComp, GAIA, and XBench-DeepSearch. |
|
|
|
|
|
<div align="center"> |
|
|
<img src="https://github.com/AQ-MedAI/MedResearcher-R1/raw/main/assets/performance.jpg" alt="Performance Table"/> |
|
|
</div> |
|
|
|
|
|
## Open-Sourced Dataset |
|
|
|
|
|
We have open-sourced a high-quality QA dataset constructed through our KnowledgeGraphConstruction module. The dataset is available at [`TrajectoryGenerationPipeline/qa_data/open_data.jsonl`](https://github.com/AQ-MedAI/MedResearcher-R1/blob/main/TrajectoryGenerationPipeline/qa_data/open_data.jsonl) and contains: |
|
|
|
|
|
- **Complex reasoning question-answer pairs**: multi-hop QA pairs generated with our knowledge-graph-based method
|
|
- **Detailed step-by-step reasoning paths** for each question, providing comprehensive problem-solving guidance |
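A minimal way to inspect the released file is to stream the JSONL directly from GitHub, assuming the standard raw-content URL for the blob linked above; the field names accessed in the snippet (e.g., `question`) are assumptions, so print the keys of the first record to confirm the actual schema.

```python
# Minimal loader for the released JSONL; the field names accessed below are
# assumptions -- inspect a record first to confirm the schema.
import json
import urllib.request

URL = ("https://raw.githubusercontent.com/AQ-MedAI/MedResearcher-R1/main/"
       "TrajectoryGenerationPipeline/qa_data/open_data.jsonl")

with urllib.request.urlopen(URL) as resp:
    records = [json.loads(line) for line in resp.read().decode("utf-8").splitlines() if line.strip()]

print(len(records), "QA records")
print(sorted(records[0].keys()))                    # confirm the actual field names
print(records[0].get("question", records[0]))       # assumed field; falls back to the raw record
```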
|
|
|
|
|
## Quick start: Run Model for Evaluation |
|
|
|
|
|
You can run a server for the model via `sglang` or `vllm` for evaluation, as described in the GitHub repository's [Quick start](https://github.com/AQ-MedAI/MedResearcher-R1#quick-start) section. |
|
|
|
|
|
First, install `sglang` and launch a server:
|
|
```bash |
|
|
pip install sglang[all] |
|
|
CUDA_VISIBLE_DEVICES=0,1 python -m sglang.launch_server --model-path /path/to/your/model --port 6001 --host 0.0.0.0 --mem-fraction-static 0.95 --tp-size 2 |
|
|
``` |
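Once the server is up, you can send a quick smoke-test request through `sglang`'s OpenAI-compatible endpoint on the same port; the model name below is simply the `--model-path` value used above, and the prompt is an arbitrary example.

```python
# Quick smoke test against the server launched above; assumes sglang's
# OpenAI-compatible endpoint on port 6001. Set model to your --model-path value.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:6001/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="/path/to/your/model",
    messages=[{"role": "user", "content": "Which gene is most commonly mutated in cystic fibrosis?"}],
    temperature=0.6,
    max_tokens=512,
)
print(resp.choices[0].message.content)
```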
|
|
Then, you can evaluate model performance using the Evaluation Pipeline as detailed in the [GitHub repo](https://github.com/AQ-MedAI/MedResearcher-R1): |
|
|
```bash |
|
|
cd EvaluationPipeline
|
|
# Run single question evaluation |
|
|
python eval_cli.py --mode interactive |
|
|
|
|
|
# Run batch dataset evaluation |
|
|
python eval_cli.py --mode batch --dataset sample --workers 20 |
|
|
``` |
|
|
|
|
|
## ✍️ Citation |
|
|
```bibtex |
|
|
@article{ant2025medresearcher, |
|
|
  title={MedResearcher-R1: Expert-Level Medical Deep Researcher via A Knowledge-Informed Trajectory Synthesis Framework},
|
|
  author={Ailing Yu and Lan Yao and Jingnan Liu and Zhe Chen and Jiajun Yin and Yuan Wang and Xinhao Liao and Zhiling Ye and Ji Li and Yun Yue and Hansong Xiao and Hualei Zhou and Chunxiao Guo and Peng Wei and Jinjie Gu},
|
|
  journal={arXiv preprint arXiv:2508.14880},
|
|
url={https://arxiv.org/abs/2508.14880}, |
|
|
year={2025} |
|
|
} |
|
|
``` |
|
|
|
|
|
## 📜 License |
|
|
MedResearcher-R1 is licensed under the MIT license.
|
|
|
|
|
--- |
|
|
|
|
|
<div align="center"> |
|
|
|
|
|
[Star History Chart](https://star-history.com/#AQ-MedAI/MedResearcher-R1&Date)
|
|
|
|
|
</div> |