ZhouChuYue committed · Commit c3a9d6a · Parent(s): 0fde1e0
Update README: Enhance Full Evaluation Results description and add R-Bench-Math
README.md CHANGED
@@ -167,7 +167,7 @@ To validate the effectiveness of our L0-L3 hierarchical framework, we conducted
 
 ### Full Evaluation Results
 
-
+To compare against existing public mathematical pre-training datasets, we trained models independently on each dataset using the same model architecture and training budget (~100B tokens). The baselines include [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1), [MegaMath-Web-Pro](https://huggingface.co/datasets/LLM360/MegaMath), and [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath). All models are evaluated under identical conditions for a fair comparison:
 
 | Model | Average | MMLU | MMLU-STEM | Math | GSM8K | MBPP | HumanEval | R-Bench-Math | Math-Bench |
 |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
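
The added paragraph describes training one checkpoint per corpus and then scoring all of them under identical conditions. As a minimal sketch of what such a run could look like, assuming EleutherAI's lm-evaluation-harness (`lm_eval`) as the scoring tool (the commit does not name the harness, and the checkpoint paths below are hypothetical placeholders):

```python
# Minimal sketch only. The commit does not say which harness produced the
# table; this assumes EleutherAI's lm-evaluation-harness (pip install lm_eval),
# a common way to score every checkpoint under identical conditions.
# The checkpoint paths below are hypothetical placeholders.
import lm_eval

# One checkpoint per pre-training corpus: same architecture, ~100B-token budget.
checkpoints = {
    "Nemotron-CC-Math": "ckpts/nemotron-cc-math",
    "MegaMath-Web-Pro": "ckpts/megamath-web-pro",
    "FineMath": "ckpts/finemath",
}

# Fixing the task list and batch size across all runs is what makes the
# comparison apples-to-apples.
TASKS = ["mmlu", "gsm8k"]

for name, path in checkpoints.items():
    out = lm_eval.simple_evaluate(
        model="hf",                      # Hugging Face transformers backend
        model_args=f"pretrained={path}",
        tasks=TASKS,
        batch_size=8,
    )
    for task in TASKS:
        print(name, task, out["results"][task])
```

The "Average" column in the table would then follow from the per-task scores, assuming a simple unweighted mean (the commit does not specify the aggregation).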