model-index:
  results: []
---

# ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates

A revolutionary inference-scaling paradigm with a hierarchical RL algorithm, empowering a 32B model with 500 thought templates to outperform o1-preview and DeepSeek-V3 on reasoning tasks.

| Task | ReasonFlux-F1-32B | ReasonFlux-Zero-32B | R1-Distill-32B | OpenAI o1-mini | LIMO-32B | s1-32B |
| :----------- | :---: | :---: | :---: | :---: | :---: | :---: |
| MATH500 | **96.0** | 91.2 | 94.3 | 90.0 | 90.6 | 84.8 |
| AIME 2024 | **76.67** | 56.7 | 72.6 | 56.7 | 50.0 | 36.0 |
| AIME 2025 | **53.33** | 37.2 | 46.67 | 50.8 | 37.2 | 26.7 |
| GPQA-Diamond | **67.17** | 61.2 | 62.1 | 60.0 | 65.2 | 59.6 |
 
# ReasonFlux-F1-32B

> ReasonFlux-F1-32B is our fine-tuned SOTA-level reasoning LLM, trained on template-augmented reasoning trajectories collected with our ReasonFlux-Zero.

* Github Repository: [Gen-Verse/ReasonFlux](https://github.com/Gen-Verse/ReasonFlux)
* Paper: [ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates](https://arxiv.org/abs/2502.06772)
* Dataset: [Gen-Verse/ReasonFlux-F1-SFT](https://huggingface.co/datasets/Gen-Verse/ReasonFlux-F1-SFT)
* Base Model: [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)

## Evaluation

We present the evaluation results of ReasonFlux-F1-32B on challenging reasoning tasks including AIME 2024, AIME 2025, MATH500, and GPQA-Diamond. For a fair comparison, we report results for all LLMs using the evaluation scripts in our [Gen-Verse/ReasonFlux](https://github.com/Gen-Verse/ReasonFlux) repository.

| Model | AIME 2024 (pass@1) | AIME 2025 (pass@1) | MATH500 (pass@1) | GPQA (pass@1) |
| --------------------------------------- | --------- | --------- | -------- | --------- |
| QwQ-32B-Preview | 46.7 | 37.2 | 90.6 | 65.2 |
| LIMO-32B | 56.3 | 44.5 | 94.80 | 58.08 |
| s1-32B | 56.7 | 26.7 | 93.0 | 59.6 |
| OpenThinker-32B | 66.0 | 53.3 | 94.8 | 60.10 |
| FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview | 76.67 | 40.0 | 93.4 | 59.09 |
| R1-Distill-32B | 70.0 | 46.67 | 92.0 | 59.59 |
| ReasonFlux-Zero-32B | 56.7 | 37.2 | 91.2 | 61.2 |
| **ReasonFlux-F1-32B** | **76.67** | **53.33** | **96.0** | **67.17** |
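
As background on the metric (this is general convention, not part of this card's evaluation scripts): pass@1 is the probability that a single sampled completion solves the problem, averaged over the benchmark. A minimal sketch of the standard unbiased pass@k estimator, from the HumanEval/Codex evaluation methodology:

```python
def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    completions drawn from n total samples (c of them correct) is correct.
    Equivalent to 1 - C(n-c, k) / C(n, k), computed stably as a product."""
    if n - c < k:
        # Fewer incorrect samples than k draws: a correct one is guaranteed.
        return 1.0
    prod = 1.0
    for i in range(n - c + 1, n + 1):
        prod *= 1.0 - k / i
    return 1.0 - prod

# With n = k = 1 this reduces to plain per-problem accuracy.
print(round(pass_at_k(4, 2, 2), 4))
```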

## Quick start with vLLM

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = 'Gen-Verse/ReasonFlux-F1'

model = LLM(
    model_id,
    tensor_parallel_size=8,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# stop_token_ids = tokenizer("<|im_end|>\n")["input_ids"]

sampling_params = SamplingParams(
    max_tokens=32768,
)

# 2022 AIME I Problems/Problem 15
# Use a raw string so LaTeX escapes (e.g. \b in \begin) survive intact.
question = r"""Let \(x, y\), and \(z\) be positive real numbers satisfying the system of equations:
\[
\begin{array}{c}
\sqrt{2 x-x y}+\sqrt{2 y-x y}=1 \\
\sqrt{2 y-y z}+\sqrt{2 z-y z}=\sqrt{2} \\
\sqrt{2 z-z x}+\sqrt{2 x-z x}=\sqrt{3} .
\end{array}
\]
Then \(\left[(1-x)(1-y)(1-z)\right]^{2}\) can be written as \(\frac{m}{n}\), where \(m\) and \(n\) are relatively prime positive integers. Find \(m+n\)."""

ds_prompt = "<|User|>\n" + question + "<|Assistant|>\n"
output = model.generate(ds_prompt, sampling_params=sampling_params)
print(output[0].outputs[0].text)
```
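
Because ReasonFlux-F1 is fine-tuned from a DeepSeek-R1 distilled model, generations typically contain a long reasoning trace before the final answer. The exact delimiters depend on your chat template, so the `</think>` tag and `\boxed{}` pattern below are assumptions to adapt to your setup; a minimal post-processing sketch:

```python
import re

def extract_final_answer(generation: str) -> str:
    """Strip the reasoning trace and pull the boxed final answer, if any.

    Assumes R1-style output where reasoning ends at a </think> tag and the
    answer is wrapped in \\boxed{...}; falls back to the trailing text.
    """
    # Keep only the text after the reasoning trace, if the tag is present.
    answer_part = generation.split("</think>")[-1]
    match = re.search(r"\\boxed\{([^{}]*)\}", answer_part)
    return match.group(1).strip() if match else answer_part.strip()

sample = "<think>Square each equation and substitute...</think>\nThus \\boxed{33}."
print(extract_final_answer(sample))  # 33
```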

## Citation

```bibtex
@article{yang2025reasonflux,
  title={ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates},
  author={Yang, Ling and Yu, Zhaochen and Cui, Bin and Wang, Mengdi},
  journal={arXiv preprint arXiv:2502.06772},
  year={2025}
}
```