GraphWalker-7B

📄 Paper (arXiv:2603.28533) | 💻 GitHub | 🤗 Model

GraphWalker-7B is a specialized large language model fine-tuned from Qwen2.5-7B-Instruct for Agentic Knowledge Graph Question Answering (KGQA). GraphWalker learns to navigate knowledge graphs via a synthetic trajectory curriculum, achieving strong generalization with a single, compact 7B model.


🌟 Overview

GraphWalker is an agentic framework for multi-turn Knowledge Graph Question Answering (KGQA) over Global Knowledge Graphs (e.g., Freebase). It transforms LLMs into reasoning agents that autonomously navigate massive KGs through a "Think-Query-Observe" loop, optimized via a synthetic curriculum.
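The "Think-Query-Observe" loop can be sketched as follows. This is a minimal illustrative toy, not GraphWalker's actual API: the triple format, `query_kg`, and the scripted policy are all assumptions; in the real system the fine-tuned model generates each action.

```python
# Minimal sketch of a "Think-Query-Observe" agent loop over a toy
# in-memory knowledge graph. All names here are illustrative
# assumptions, not GraphWalker's actual interface.

TOY_KG = {
    ("Barack Obama", "place_of_birth"): ["Honolulu"],
    ("Honolulu", "contained_by"): ["Hawaii"],
}

def query_kg(entity, relation):
    """Observe: return neighbor entities for (entity, relation)."""
    return TOY_KG.get((entity, relation), [])

def scripted_policy(question, history):
    """Stand-in for the LLM's 'Think' step: emit the next action.
    A real agent would generate this with the fine-tuned model."""
    if not history:
        return ("query", "Barack Obama", "place_of_birth")
    if len(history) == 1:
        return ("query", history[-1][1][0], "contained_by")
    return ("answer", history[-1][1][0])

def run_agent(question, policy, max_turns=5):
    """Alternate Think (policy) and Query/Observe (KG lookup) steps."""
    history = []
    for _ in range(max_turns):
        action = policy(question, history)
        if action[0] == "answer":
            return action[1]
        _, entity, relation = action
        observation = query_kg(entity, relation)  # Observe
        history.append(((entity, relation), observation))
    return None

print(run_agent("Which US state was Obama born in?", scripted_policy))
# -> Hawaii
```

The key design point is that the agent decides each KG query from the accumulated observation history, rather than retrieving a fixed subgraph up front.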


🛠️ Usage

1. Environment Setup

pip install vllm transformers

2. Download the Model

# Via huggingface-cli
huggingface-cli download <your-org>/GraphWalker-7B --local-dir ./GraphWalker-7B

3. Inference with vLLM (Recommended)

Start the vLLM server:

vllm serve "./GraphWalker-7B" \
    --host 0.0.0.0 --port 22240 \
    --served-model-name graphwalker-7b \
    --gpu-memory-utilization 0.9 \
    --dtype auto \
    --chat-template "./GraphWalker-7B/chat_template.jinja"
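Once the server is running, it exposes a standard OpenAI-compatible API. A minimal client sketch is shown below; the port (22240) and served model name match the serve command above, while the plain single-turn prompt is an assumption — check the repository's chat template for the exact agent prompt format.

```python
# Query the local vLLM OpenAI-compatible endpoint started above.
# The prompt format here is a plain single-turn assumption, not
# GraphWalker's actual agent prompt.
import json
from urllib import request

API_URL = "http://localhost:22240/v1/chat/completions"

def build_payload(question):
    """Build an OpenAI-style chat request for the served model."""
    return {
        "model": "graphwalker-7b",
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.0,
    }

def ask(question):
    """POST the request to the local vLLM server; return the reply text."""
    req = request.Request(
        API_URL,
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (requires the server to be up):
#   answer = ask("Where was Barack Obama born?")
```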

For training and evaluation scripts, see the 💻 GitHub repository.


📈 Evaluation Results

| Method | Backbone | CWQ EM | CWQ F1 | WebQSP EM | WebQSP F1 |
|---|---|---|---|---|---|
| †Vanilla Agent | Qwen2.5-7B-Instruct | 40.7 | 33.2 | 68.4 | 66.1 |
| †Vanilla Agent | GPT-4o-mini | 63.4 | 60.3 | 79.6 | 70.6 |
| †Vanilla Agent | DeepSeek-V3.2 | 69.8 | 63.5 | 76.7 | 71.8 |
| GraphWalker-7B-SFT | Qwen2.5-7B-Instruct | 68.3 | 63.2 | 82.0 | 79.1 |
| GraphWalker-3B-SFT-RL | Qwen2.5-3B-Instruct | 70.9 | 65.2 | 83.5 | 81.7 |
| GraphWalker-8B-SFT-RL | LLaMA3.1-8B-Instruct | 78.5 | 69.6 | 88.2 | 84.5 |
| GraphWalker-7B-SFT-RL | Qwen2.5-7B-Instruct | 79.6 | 74.2 | 91.5 | 88.6 |

📝 Citation

If you use GraphWalker-7B or find this work helpful, please cite:

@misc{xu2026graphwalkeragenticknowledgegraph,
      title={GraphWalker: Agentic Knowledge Graph Question Answering via Synthetic Trajectory Curriculum}, 
      author={Shuwen Xu and Yao Xu and Jiaxiang Liu and Chenhao Yuan and Wenshuo Peng and Jun Zhao and Kang Liu},
      year={2026},
      eprint={2603.28533},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2603.28533}, 
}

📄 License

This model is released under the Apache 2.0 License, consistent with the base model Qwen2.5-7B-Instruct.
