yinjiewang committed (verified)
Commit 55db3a1 · 1 Parent: 42ead8b

Update README.md

Files changed (1): README.md (+6 −1)
README.md CHANGED
```diff
@@ -8,6 +8,11 @@ library_name: transformers
 </p>
 
 
+<p align="center">
+<img src="https://github.com/yinjjiew/Data/raw/main/cure/results.png" width="100%"/>
+</p>
+
+[Paper](https://arxiv.org/abs/2505.15809) | [Code](https://github.com/Gen-Verse/CURE)
 
 # Introduction to our ReasonFlux-Coders
 
@@ -16,7 +21,7 @@ We introduce **ReasonFlux-Coders**, trained with **CURE**, our algorithm for co-
 * **ReasonFlux-Coder-7B** and **ReasonFlux-Coder-14B** outperform similarly sized Qwen Coders, DeepSeek Coders, and Seed-Coders, and naturally integrate into common test-time scaling and agentic coding pipelines.
 * **ReasonFlux-Coder-4B** is our Long-CoT model, outperforming Qwen3-4B while achieving 64.8% efficiency in unit test generation. We have demonstrated its ability to serve as a reward model for training base models via reinforcement learning (see our [paper](https://arxiv.org/abs/2505.15809)).
 
-[Paper](https://arxiv.org/abs/2505.15809) | [Code](https://github.com/Gen-Verse/CURE)
+
 
 # Citation
 
```
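Since the model card declares `library_name: transformers`, the models described in this README can presumably be loaded through the standard `transformers` chat workflow. Below is a minimal, hedged sketch; the repo ID `Gen-Verse/ReasonFlux-Coder-7B`, the chat-style prompt format, and the helper names are assumptions, not confirmed by the diff above.

```python
# Hedged sketch of using a ReasonFlux-Coder model via Hugging Face transformers.
# ASSUMPTIONS: the Hub repo ID and the chat-message prompt format below are
# illustrative guesses; check the actual model card before use.


def build_messages(task: str) -> list:
    """Wrap a coding task as a chat message list (format is an assumption)."""
    return [{"role": "user", "content": task}]


def generate_solution(task: str, model_id: str = "Gen-Verse/ReasonFlux-Coder-7B") -> str:
    """Generate code for `task`; downloads model weights on first call."""
    # Heavyweight imports kept local so the module loads without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    prompt = tokenizer.apply_chat_template(
        build_messages(task), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

A pipeline evaluating the co-evolved unit-test generation described in the bullets would call `generate_solution` twice, once with a coding prompt and once with a test-writing prompt, and cross-check the results.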