Add pipeline tag and library name, add usage example and missing sections

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +52 -5
README.md CHANGED
@@ -1,10 +1,13 @@
  ---
- license: mit
- language:
- - en
  base_model:
  - Qwen/Qwen2.5-14B-Instruct
+ language:
+ - en
+ license: mit
+ pipeline_tag: text-ranking
+ library_name: transformers
  ---
+
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/654d784d71a30c4bca09a319/Q7MVJfIHDerQ24c1zwZwK.png)

  <font size=3><div align='center' >
@@ -13,7 +16,7 @@ base_model:
  [[**📖 Paper**](https://arxiv.org/abs/2505.02387)]
  </div></font>

  # 🚀 Can we cast reward modeling as a reasoning task?

  **RM-R1** is a training framework for *Reasoning Reward Models* (ReasRMs), which judge two candidate answers by first **thinking out loud**—generating rubrics or reasoning traces—then emitting a preference.
  Compared with prior scalar or vanilla generative reward models, RM-R1 delivers up to **+13.8% absolute accuracy gains** on public reward model benchmarks while providing *fully interpretable* critiques.
@@ -28,4 +31,48 @@ Compared with prior scalar or vanilla generative reward models, RM-R1 delivers u
  ## Intended uses
  * **RLHF / RLAIF**: plug-and-play reward function for policy optimisation.
  * **Automated evaluation**: LLM-as-a-judge for open-domain QA, chat, and reasoning.
- * **Research**: study process supervision, chain-of-thought verification, or rubric generation.
+ * **Research**: study process supervision, chain-of-thought verification, or rubric generation.
+
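+ ## Usage
+ A minimal sketch of querying RM-R1 as a generative judge via the `transformers` pipeline. The prompt format is illustrative (the paper's exact judging template may differ), and `your_model` is a placeholder for the model id: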
+ ```python
+ from transformers import pipeline
+
+ # RM-R1 is a generative judge: it writes a critique before stating a preference,
+ # so it is queried with a text-generation pipeline rather than text-classification.
+ judge = pipeline("text-generation", model="your_model")
+ prompt = "Question: What is 2+2?\nAnswer A: 4\nAnswer B: 5\nReason step by step, then end with 'Preference: A' or 'Preference: B'."
+ print(judge(prompt, max_new_tokens=512)[0]["generated_text"])
+ ```
+
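+ For the **RLHF / RLAIF** use listed above, the judge's textual verdict has to become a scalar reward. The helper below is a hypothetical sketch, not the paper's method: it assumes the generation ends with the `Preference: A/B` line requested in the prompt above.
+
+ ```python
+ import re
+
+ def preference_reward(generation: str) -> float:
+     """Map a free-text verdict to a scalar reward for candidate A (hypothetical format)."""
+     match = re.search(r"Preference:\s*([AB])", generation)
+     if match is None:
+         return 0.0  # no clear verdict found in the critique
+     return 1.0 if match.group(1) == "A" else -1.0
+
+ print(preference_reward("...detailed critique... Preference: A"))  # prints 1.0
+ ```
+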
+ ## Training
+ - coming soon
+
+ ## Evaluation
+ - coming soon
+
+ ## Use Our Model
+ - coming soon
+
+ ## Build Your Own Dataset
+ - coming soon
+
+ ## Features
+ - Open release of the trained model and the full accompanying datasets. ✔️
+ - End-to-end pipelines for both supervised fine-tuning (SFT) and reinforcement learning (RL). ✔️
+ - Support for different RL frameworks. ✔️
+ - Support for both Slurm and interactive training. ✔️
+ - Support for multi-node, multi-GPU training. ✔️
+ - Support for different LLMs. ✔️
+ - Support for building your own custom dataset.
+ - One-command evaluation on public RM benchmarks for quick, reproducible reporting.
+
+ ## Acknowledgement
+ The concept of RM-R1 is inspired by [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1). Its implementation is built upon [veRL](https://github.com/volcengine/verl) and [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF). We sincerely appreciate these teams' contributions to open-source research and development.
+
+ ## Citations
+
+ ```bibtex
+ @misc{2505.02387,
+   Author = {Xiusi Chen and Gaotang Li and Ziqi Wang and Bowen Jin and Cheng Qian and Yu Wang and Hongru Wang and Yu Zhang and Denghui Zhang and Tong Zhang and Hanghang Tong and Heng Ji},
+   Title = {RM-R1: Reward Modeling as Reasoning},
+   Year = {2025},
+   Eprint = {arXiv:2505.02387},
+ }
+ ```