Add model card for Thinker model
#1 by nielsr (HF Staff) - opened

README.md CHANGED
````diff
@@ -1,3 +1,31 @@
----
-license: apache-2.0
----
+---
+license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
+---
+
+# Thinker: Training LLMs in Hierarchical Thinking for Deep Search via Multi-Turn Interaction
+
+This repository contains the KAG-Thinker model presented in the paper [Thinker: Training LLMs in Hierarchical Thinking for Deep Search via Multi-Turn Interaction](https://huggingface.co/papers/2511.07943).
+
+## Abstract
+Efficient retrieval of external knowledge bases and web pages is crucial for enhancing the reasoning abilities of LLMs. Previous works on training LLMs to leverage external retrievers for solving complex problems have predominantly employed end-to-end reinforcement learning. However, these approaches neglect supervision over the reasoning process, making it difficult to guarantee logical coherence and rigor. To address these limitations, we propose Thinker, a hierarchical thinking model for deep search through multi-turn interaction, making the reasoning process supervisable and verifiable. It decomposes complex problems into independently solvable sub-problems, each dually represented in both natural language and an equivalent logical function to support knowledge base and web searches. Concurrently, dependencies between sub-problems are passed as parameters via these logical functions, enhancing the logical coherence of the problem-solving process. To avoid unnecessary external searches, we perform knowledge boundary determination to check if a sub-problem is within the LLM's intrinsic knowledge, allowing it to answer directly. Experimental results indicate that with as few as several hundred training samples, the performance of Thinker is competitive with established baselines. Furthermore, when scaled to the full training set, Thinker significantly outperforms these methods across various datasets and model sizes.
+
+## Code and More Information
+For detailed installation instructions, training scripts, additional inference examples, and full evaluation results, please refer to the official GitHub repository:
+[https://github.com/OpenSPG/KAG-Thinker](https://github.com/OpenSPG/KAG-Thinker)
+
+## Citation
+If you find our work helpful or inspiring, please feel free to cite it:
+
+```bibtex
+@misc{zhang2025kagthinkerinteractivethinkingdeep,
+      title={KAG-Thinker: Interactive Thinking and Deep Reasoning in LLMs via Knowledge-Augmented Generation},
+      author={Dalong Zhang and Jun Xu and Jun Zhou and Lei Liang and Lin Yuan and Ling Zhong and Mengshu Sun and Peilong Zhao and QiWei Wang and Xiaorui Wang and Xinkai Du and YangYang Hou and Yu Ao and ZhaoYang Wang and Zhengke Gui and ZhiYing Yi and Zhongpu Bo},
+      year={2025},
+      eprint={2506.17728},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL},
+      url={https://arxiv.org/abs/2506.17728},
+}
+```
````
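As a purely illustrative sketch of the decomposition the abstract describes (this is not the model's actual output format or the repository's API; all names here, such as `SubProblem`, `resolve`, and the `Retrieval` function, are hypothetical), a sub-problem can be thought of as a natural-language question paired with an equivalent logical function whose parameters carry dependencies on earlier sub-problems:

```python
from dataclasses import dataclass


@dataclass
class SubProblem:
    """One sub-problem, dually represented as natural language and a logical function."""
    question: str  # natural-language form
    func: str      # logical-function name (illustrative placeholder)
    args: dict     # parameters; values like "#1" are dependencies on earlier answers


def resolve(args: dict, answers: dict) -> dict:
    """Substitute '#k' dependency placeholders with the answers of earlier sub-problems."""
    return {k: answers.get(v, v) for k, v in args.items()}


# Hypothetical decomposition of a two-hop question.
plan = [
    SubProblem("Which film won Best Picture in 1998?", "Retrieval",
               {"subject": "Best Picture 1998", "predicate": "winner"}),
    SubProblem("Who directed that film?", "Retrieval",
               {"subject": "#1", "predicate": "director"}),
]

answers = {}
for i, sp in enumerate(plan, start=1):
    grounded = resolve(sp.args, answers)  # dependencies passed as parameters
    # A real system would first perform knowledge boundary determination here:
    # answer directly from the LLM's intrinsic knowledge, or fall back to a
    # knowledge base / web search when the sub-problem lies outside it.
    answers[f"#{i}"] = f"<answer to: {sp.question}>"
```

The point of the sketch is only the dependency-passing mechanism: the second sub-problem's `"#1"` argument is grounded with the first sub-problem's answer before it is solved, which is how the logical-function representation keeps the multi-step reasoning chain coherent.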