yinjiewang committed on
Commit 6890ee5 · verified · 1 Parent(s): ee50f86

Update README.md

Files changed (1): README.md (+10 -5)
README.md CHANGED
@@ -3,16 +3,21 @@ license: mit
 ---
 
 <p align="center">
-<img src="https://github.com/Gen-Verse/dLLM-RL/raw/main/assets/figure1.png" width="100%"/>
+<img src="https://github.com/yinjjiew/Data/blob/main/dllm-rl/figure1.png" width="100%"/>
 </p>
 
 
-# Introduction to our ReasonFlux-Coders
+<p align="center">
+<img src="https://github.com/yinjjiew/Data/blob/main/dllm-rl/maintable.png" width="100%"/>
+</p>
+
+
+# Introduction to TraDo
 
-We introduce **ReasonFlux-Coders**, trained with **CURE**, our algorithm for co-evolving an LLM's coding and unit test generation abilities.
+We introduce **TraDo**, a SOTA diffusion language model trained with **TraceRL**.
 
-* **ReasonFlux-Coder-7B** and **ReasonFlux-Coder-14B** outperform similarly sized Qwen Coders, DeepSeek Coders, and Seed-Coders, and naturally integrate into common test-time scaling and agentic coding pipelines.
-* **ReasonFlux-Coder-4B** is our Long-CoT model, outperforming Qwen3-4B while achieving 64.8% efficiency in unit test generation. We have demonstrated its ability to serve as a reward model for training base models via reinforcement learning (see our [paper](https://arxiv.org/abs/2506.03136)).
+* **TraDo-4B-Instruct** and **TraDo-8B-Instruct** outperform similarly sized strong AR models across math reasoning tasks.
+* **TraDo-8B-Thinking** is the first Long-CoT diffusion language model.
 
 [Paper](https://arxiv.org/abs/2506.03136) | [Code](https://github.com/Gen-Verse/CURE)