bmbgsj committed (verified)
Commit f35a3f2 · 1 Parent(s): 6939631

Update README.md

Files changed (1):
  1. README.md +33 -37
README.md CHANGED
@@ -1,57 +1,53 @@
  ---
  library_name: transformers
- model_name: prm
  tags:
- - generated_from_trainer
- - reward-trainer
- - trl
- licence: license
  ---

- # Model Card for prm

- This model is a fine-tuned version of [None](https://huggingface.co/None).
- It has been trained using [TRL](https://github.com/huggingface/trl).

- ## Quick start

- ```python
- from transformers import pipeline

- text = "The capital of France is Paris."
- rewarder = pipeline(model="None", device="cuda")
- output = rewarder(text)[0]
- print(output["score"])
- ```

- ## Training procedure

- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lilin22/zhaowang/runs/7233254701.72867-f9dda944-4408)

- This model was trained with Reward.

- ### Framework versions

- - TRL: 0.26.2
- - Transformers: 4.57.3
- - Pytorch: 2.8.0
- - Datasets: 4.4.2
- - Tokenizers: 0.22.1

- ## Citations

- Cite TRL as:

  ```bibtex
- @misc{vonwerra2022trl,
-     title = {{TRL: Transformer Reinforcement Learning}},
-     author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
-     year = 2020,
-     journal = {GitHub repository},
-     publisher = {GitHub},
-     howpublished = {\url{https://github.com/huggingface/trl}}
  }
  ```
 
  ---
+ language:
+ - en
+ - zh
+ license: apache-2.0
  library_name: transformers
  tags:
+ - qwen3
+ - reward-model
+ - text-classification
+ base_model: Qwen/Qwen3-8B
+ pipeline_tag: text-classification
+ arxiv: 2601.21912
  ---

+ # Model Card for ProRAG-PRM

+ This is the **Process Reward Model (PRM)** for the ProRAG project. It is fine-tuned from [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) to evaluate the quality of intermediate reasoning steps.

+ It is based on the methodology described in the paper [arXiv:2601.21912](https://arxiv.org/abs/2601.21912).

+ ## Model Details

+ - **Base Model:** Qwen3-8B
+ - **Type:** Process Reward Model (PRM) / Sequence Classification
+ - **Task:** Step-by-step Reasoning Evaluation
+ - **Paper:** [View on arXiv](https://arxiv.org/abs/2601.21912)

+ ## 💻 Code & Inference

+ This model is designed to assign rewards/scores to reasoning steps.

+ For the specific scoring logic, data formatting (e.g., how to mark steps), and inference scripts, please refer to our GitHub repository:

+ 👉 **[ProRAG on GitHub](https://github.com/lilinwz/ProRAG/tree/main)**

+ *(Please use the scoring script provided in the repo, as standard Hugging Face pipelines may not interpret the process rewards correctly without the expected formatting.)*
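
+ As a rough illustration only (the repository's scoring script is authoritative), the sketch below queries the PRM as a plain sequence classifier. The model id, the newline step delimiter, and the single-logit reward head are assumptions made for the sketch, not the repo's actual interface:

+ ```python
+ # Hypothetical sketch, NOT the official ProRAG scoring script.
+ # Assumed: the PRM loads as a sequence-classification head with a single
+ # reward logit, and reasoning steps are joined with newlines. See the repo
+ # for the actual step-marking format.
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model_id = "lilinwz/ProRAG-PRM"  # placeholder id; substitute the real one
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSequenceClassification.from_pretrained(
+     model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ question = "What is 12 * 12?"
+ steps = ["12 * 12 = 12 * 10 + 12 * 2.", "120 + 24 = 144."]
+
+ # Score the growing reasoning prefix after each step; higher = better step.
+ for i in range(len(steps)):
+     text = question + "\n" + "\n".join(steps[: i + 1])
+     inputs = tokenizer(text, return_tensors="pt").to(model.device)
+     with torch.no_grad():
+         reward = model(**inputs).logits.squeeze().item()
+     print(f"step {i + 1}: reward = {reward:.3f}")
+ ```

+ Scoring each prefix mirrors how a PRM judges a step in the context of the steps before it; the exact step marking and aggregation used in the paper may differ.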

+ ## Citation

+ If you use this model or the associated paper in your research, please cite:

  ```bibtex
+ @misc{wang2026proragprocesssupervisedreinforcementlearning,
+   title={ProRAG: Process-Supervised Reinforcement Learning for Retrieval-Augmented Generation},
+   author={Zhao Wang and Ziliang Zhao and Zhicheng Dou},
+   year={2026},
+   eprint={2601.21912},
+   archivePrefix={arXiv},
+   primaryClass={cs.AI},
+   url={https://arxiv.org/abs/2601.21912},
  }
  ```