---
language:
- en
- zh
license: apache-2.0
library_name: transformers
tags:
- qwen3
- text-generation
- causal-lm
base_model: Qwen/Qwen3-8B
pipeline_tag: text-generation
arxiv: 2601.21912
---

# Model Card for ProRAG

This model is fine-tuned from [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) following the methodology described in the ProRAG paper (arXiv: **2601.21912**).

## Model Details

- **Base Model:** Qwen3-8B
- **Language:** English, Chinese (and others supported by Qwen3)
- **Paper:** [View on arXiv](https://arxiv.org/abs/2601.21912)
- **Library:** Transformers

## 💻 Code & Inference

For inference code, usage examples, and reproduction scripts, please refer to our GitHub repository:

👉 **[ProRAG on GitHub](https://github.com/lilinwz/ProRAG/tree/main)**

*(Please verify the details and instructions on the GitHub page.)*
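Until you have checked the repository, the following minimal sketch shows one way to run retrieval-augmented inference with a Qwen3-based checkpoint via `transformers`. The Hub repo id, prompt layout, and generation settings below are illustrative assumptions, not the released configuration; defer to the GitHub instructions for the actual usage.

```python
def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Format retrieved passages and a question into a single prompt.

    This layout is an illustrative assumption, not the paper's format.
    """
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using the passages below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )


if __name__ == "__main__":
    # Heavy dependencies are loaded only when run as a script.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "lilinwz/ProRAG"  # hypothetical repo id -- see the GitHub page

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    prompt = build_rag_prompt(
        "Who wrote 'The Art of Computer Programming'?",
        ["Donald Knuth is the author of 'The Art of Computer Programming'."],
    )
    # Qwen3 checkpoints ship a chat template; apply it before generating.
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The prompt-building helper is kept free of model dependencies so it can be reused or tested without downloading the checkpoint.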

## Citation

If you use this model or the associated paper in your research, please cite:

```bibtex
@misc{wang2026proragprocesssupervisedreinforcementlearning,
      title={ProRAG: Process-Supervised Reinforcement Learning for Retrieval-Augmented Generation}, 
      author={Zhao Wang and Ziliang Zhao and Zhicheng Dou},
      year={2026},
      eprint={2601.21912},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2601.21912}, 
}
```