---
language:
- en
base_model:
- Qwen/Qwen3-8B-Base
---

# ✨ Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization

We present Klear-Reasoner, a model with long reasoning capabilities that deliberates carefully during problem solving and achieves outstanding performance across multiple benchmarks. We investigate two key issues with current clipping mechanisms in RL: clipping suppresses critical exploration signals and ignores suboptimal trajectories. To address these challenges, we propose **G**radient-**P**reserving clipping **P**olicy **O**ptimization (**GPPO**), which gently backpropagates gradients from clipped tokens.
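
As a rough intuition, the idea can be sketched in a few lines of PyTorch. This is a minimal illustration of gradient-preserving clipping via a detach-based straight-through trick, not the official implementation; the function name `gppo_loss` and the clip bounds are our assumptions.

```python
import torch

def gppo_loss(logp_new, logp_old, advantages, eps_low=0.2, eps_high=0.2):
    # Illustrative sketch only (not the Klear-Reasoner repo's code).
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    # Straight-through clipping: the forward pass uses the clipped ratio,
    # but gradients flow through the unclipped ratio, so tokens that fall
    # outside the clip range still contribute a (bounded) learning signal.
    ratio_st = clipped.detach() + ratio - ratio.detach()
    # Vanilla PPO takes min(ratio * A, clamp(ratio) * A), where the clamped
    # branch has zero gradient; here that branch stays differentiable.
    loss = -torch.min(ratio * advantages, ratio_st * advantages)
    return loss.mean()
```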

## 📌 Overview

<div align="center">
<img src="./docker/main_result.png" width="100%"/>

<sub>Benchmark accuracy of Klear-Reasoner-8B on AIME 2024/2025 (avg@64), LiveCodeBench V5 (2024/08/01–2025/02/01, avg@8), and V6 (2025/02/01–2025/05/01, avg@8).</sub>
</div>

Klear-Reasoner is an 8-billion-parameter reasoning model that achieves **SOTA** performance on challenging **math and coding benchmarks**:

| Benchmark | AIME 2024 | AIME 2025 | LiveCodeBench V5 | LiveCodeBench V6 |
|---|---|---|---|---|
| **Score** | **90.5%** | **83.2%** | **66.0%** | **58.1%** |

The model combines:
1. **Quality-centric long CoT SFT** – distilled from DeepSeek-R1-0528.
2. **Gradient-Preserving Clipping Policy Optimization (GPPO)** – a novel RL method that **keeps gradients from clipped tokens** to boost exploration and convergence.

---

## 📊 Benchmark Results (Pass@1)

| Model | AIME 2024<br>avg@64 | AIME 2025<br>avg@64 | HMMT 2025<br>avg@64 | LCB V5<br>avg@8 | LCB V6<br>avg@8 |
|-------|--------------------|--------------------|--------------------|-----------------|-----------------|
| AReal-boba-RL-7B | 61.9 | 48.3 | 29.4 | 34.3 | 31.0† |
| MiMo-7B-RL | 68.2 | 55.4 | 35.7 | 57.8 | 49.3 |
| Skywork-OR1-7B | 70.2 | 54.6 | 35.7 | 47.6 | 42.7 |
| AceReason-Nemotron-1.1-7B | 72.6 | 64.8 | 42.9 | 57.2 | 52.1 |
| POLARIS-4B-Preview | 81.2 | _79.4_ | 58.7 | 58.5† | 53.0† |
| Qwen3-8B | 76.0 | 67.3 | 44.7† | 57.5 | 48.4† |
| DeepSeek-R1-0528-Distill-8B | _86.0_ | 76.3 | 61.5 | 61.0† | 51.6† |
| OpenReasoning-Nemotron-7B | 84.7 | 78.2 | 63.5 | _65.6_† | _56.3_† |
| Klear-Reasoner-8B-SFT | 75.6 | 70.1 | 57.6 | 58.5 | 49.6 |
| Klear-Reasoner-8B | 83.2 | 75.6 | 60.3 | 61.6 | 53.1 |
| *w/ 64K inference budget* | **90.5** | **83.2** | **70.8** | **66.0** | **58.1** |

> We report average `pass@1` results (avg@_n_), with all other evaluation settings following the DeepSeek-R1 assessment framework (temperature=0.6, top_p=0.95).
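
For clarity, avg@_n_ here means pass@1 estimated by averaging correctness over _n_ independent samples per problem. A tiny illustrative helper (our naming, not part of the evaluation framework):

```python
def avg_at_n(per_sample_correct):
    """Mean pass@1 over n sampled generations per problem.

    per_sample_correct[i][j] is True iff the j-th of n generations
    for problem i is judged correct. (Illustrative helper only.)
    """
    per_problem = [sum(samples) / len(samples) for samples in per_sample_correct]
    return sum(per_problem) / len(per_problem)

# e.g. two problems, two samples each -> (0.5 + 1.0) / 2 = 0.75
print(avg_at_n([[True, False], [True, True]]))
```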

---

## 🧪 Training

### Configure the experimental environment

```bash
git clone https://github.com/suu990901/Klear_Reasoner.git
cd Klear_Reasoner
pip install -r requirements.txt
```

For code tasks, we use [Firejail](https://github.com/netblue30/firejail) as the **sandbox** environment. On top of it, we implement multi-process control with [Pebble](https://github.com/noxdafox/pebble), which lets us reclaim all resources allocated to a task when execution times out, as in the sketch below. For mathematics, we use [math_verify](https://github.com/huggingface/Math-Verify) for answer judging.
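
A rough sketch of how such timeout-guarded sandboxed execution can look with Pebble and Firejail (our illustration; `run_in_sandbox` and the paths are hypothetical, not names from the Klear_Reasoner repo):

```python
import subprocess
from concurrent.futures import TimeoutError
from pebble import ProcessPool

def run_in_sandbox(code_path):
    # Execute the candidate program inside a Firejail sandbox.
    result = subprocess.run(
        ["firejail", "--quiet", "python3", code_path],
        capture_output=True, text=True,
    )
    return result.stdout

with ProcessPool(max_workers=8) as pool:
    # schedule(..., timeout=...) makes Pebble terminate the whole worker
    # process on timeout, reclaiming every resource the task allocated.
    future = pool.schedule(run_in_sandbox, args=("solution.py",), timeout=15)
    try:
        print(future.result())
    except TimeoutError:
        print("execution timed out; worker process was terminated")
```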

### Using Ray for Multi-Node Training

For multi-node training, make sure all nodes are started and connected via Ray before executing the training script. Below is a brief guide to setting up Ray across multiple machines.

#### Step 1: Start Ray on the Head Node (node0)

On the first node (typically called `node0`), run:

```bash
ray start --head --dashboard-host=0.0.0.0
```

Then get the IP address of the head node:

```bash
MASTER_IP=$(hostname -I | awk '{print $1}')
```

#### Step 2: Connect Other Nodes (e.g., node1)

On each additional worker node (e.g., `node1`), run the following, replacing the IP with that of your head node:

```bash
ray start --address="$MASTER_IP:6379"
```
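
Once the workers have joined, you can confirm that the cluster sees every node with a short Python check on the head node (a sketch using Ray's public API, not part of the training scripts):

```python
import ray

ray.init(address="auto")  # attach to the already-running cluster
alive = [node for node in ray.nodes() if node["Alive"]]
print(f"{len(alive)} node(s) alive in the Ray cluster")
ray.shutdown()
```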

### RL Training

Run the following script on the head node to start the training task:

```bash
bash recipe/dapo/perf_run_dapo_ours_math.sh # For Math RL
bash recipe/dapo/perf_run_dapo_ours_code.sh # For Code RL
```

In the startup script, you need to set the following variables:

```bash
YOUR_MODEL_PATH="<your_model_path>"
CKPTS_SAVE_DIR="<ckpts_save_path>"
YOUR_TRAIN_FILE="<train_data_path>"
YOUR_TEST_FILE="<test_data_path>"
```