Update README.md
We introduce **ReasonFlux-Coders**, trained with **CURE**, our algorithm for co-evolving an LLM's coding and unit test generation abilities.

* **ReasonFlux-Coder-7B** and **ReasonFlux-Coder-14B** outperform similarly sized Qwen Coders, DeepSeek Coders, and Seed-Coders, and integrate naturally into common test-time scaling and agentic coding pipelines.
* **ReasonFlux-Coder-4B** is our Long-CoT model, outperforming Qwen3-4B while achieving 64.8% efficiency in unit test generation. We have demonstrated its ability to serve as a reward model for training base models via reinforcement learning (see our [paper](https://arxiv.org/abs/2506.03136)).
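
Since the card declares `library_name: transformers`, a minimal sketch of running one of these models with the standard `transformers` generation API follows. The Hub repo id `Gen-Verse/ReasonFlux-Coder-7B` and the example prompt are assumptions; substitute the actual model id from this page.

```python
# Minimal sketch: load a ReasonFlux-Coder model and ask it for code plus a unit test.
# Assumption: the repo id below; replace it with the actual model id on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Gen-Verse/ReasonFlux-Coder-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Prompt the model to produce a function together with a unit test for it.
messages = [
    {
        "role": "user",
        "content": "Write a Python function that reverses a string, plus one unit test.",
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```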