ilgee committed · Commit 87aa611 · verified · 1 Parent(s): a9c8eae

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +3 -8
README.md CHANGED
@@ -9,9 +9,9 @@ tags:
   - preference-learning
 ---
 
-# Binary-Think-RM
+# Binary-Think-RM-8B
 
-Binary-Think-RM is a generative reward model with long-horizon reasoning capabilities, introduced in the paper [Think-RM: Enabling Long-Horizon Reasoning in Generative Reward Models](https://arxiv.org/abs/2505.16265).
+Binary-Think-RM-8B is a generative reward model with long-horizon reasoning capabilities, introduced in the paper [Think-RM: Enabling Long-Horizon Reasoning in Generative Reward Models](https://arxiv.org/abs/2505.16265).
 
 This model is fine-tuned from [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) using a two-stage training process: (1) reasoning-oriented supervised fine-tuning (SFT) using [ilgee/hs2-naive-reasoning-binary-max](https://huggingface.co/datasets/ilgee/hs2-naive-reasoning-binary-max) and (2) reinforcement learning with verifiable rewards (RLVR) using the prompt portion of [ilgee/hs2-naive-reasoning-binary-max](https://huggingface.co/datasets/ilgee/hs2-naive-reasoning-binary-max).
 
@@ -76,12 +76,7 @@ message = tokenizer.apply_chat_template(
 
 ## Performance
 
-Binary-Think-RM demonstrates significant improvements over baseline reward models:
-
-- **RewardBench**: Up to 5% average improvement, with >10% gains on Chat Hard and >5% on Reasoning subcategories
-- **RM-Bench**: Up to 8% average improvement, with 12% gains in the Math domain
-- **HelpSteer3-Preference**: Achieves the highest scores on this reasoning-heavy code domain
-- Strong generalization to out-of-distribution tasks
+For detailed performance metrics on RewardBench, RM-Bench, HelpSteer2-Preference, and HelpSteer3-Preference, please refer to Tables 1, 2, and 3 in the paper: [Think-RM: Enabling Long-Horizon Reasoning in Generative Reward Models](https://arxiv.org/abs/2505.16265).
 
 ## Citation
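For context, the second hunk header shows that the README's unchanged usage section builds its prompt with `tokenizer.apply_chat_template(`. Below is a minimal sketch of what using this reward model might look like, assuming the standard `transformers` API; the repository id, the prompt wording, and the generation settings are illustrative assumptions, not taken from this commit.

```python
# Minimal sketch (not from the commit): load the model and build a pairwise
# judgment prompt with apply_chat_template, then generate the model's
# reasoning and binary verdict. Repo id and prompt format are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ilgee/Binary-Think-RM-8B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical pairwise comparison: the RM reasons over both candidates,
# then emits a binary verdict indicating the preferred response.
prompt = (
    "Given a user question and two candidate responses, reason about their "
    "quality and decide which response is better.\n\n"
    "[Question]\nWhat causes tides?\n\n"
    "[Response A]\nTides are caused mainly by the Moon's gravity.\n\n"
    "[Response B]\nTides happen because the ocean is windy.\n"
)
message = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    tokenize=False,
    add_generation_prompt=True,
)

inputs = tokenizer(message, return_tensors="pt").to(model.device)
# Long-horizon reasoning traces need a generous token budget.
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```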