ZhouChuYue committed
Commit c3a9d6a · 1 Parent(s): 0fde1e0

Update README: Enhance Full Evaluation Results description and add R-Bench-Math

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -167,7 +167,7 @@ To validate the effectiveness of our L0-L3 hierarchical framework, we conducted
 
 ### Full Evaluation Results
 
-We used a single dataset for independent training to directly compare the effects of different data sources:
+To compare against existing public mathematical pre-training datasets, we trained models independently on each dataset using the same model architecture and training budget (~100B tokens). The baselines include [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1), [MegaMath-Web-Pro](https://huggingface.co/datasets/LLM360/MegaMath), and [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath). All models are evaluated under identical conditions for a fair comparison:
 
 | Model | Average | MMLU | MMLU-STEM | Math | GSM8K | MBPP | HumanEval | R-Bench-Math | Math-Bench |
 |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
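
The added paragraph describes evaluating one checkpoint per baseline dataset under a single fixed configuration. A minimal sketch of what such a run could look like, assuming EleutherAI's lm-evaluation-harness as the evaluation tool; the checkpoint paths are placeholders, and the custom R-Bench-Math / Math-Bench suites (which the harness does not ship) are omitted:

```python
# Minimal sketch of the "identical conditions" protocol: every dataset-specific
# checkpoint is scored with one fixed harness configuration.
# Assumes EleutherAI's lm-evaluation-harness (pip install lm-eval); the
# checkpoint paths are placeholders for models trained on each baseline dataset.
import lm_eval

CHECKPOINTS = [
    "ckpts/trained-on-nemotron-cc-math",
    "ckpts/trained-on-megamath-web-pro",
    "ckpts/trained-on-finemath",
]

# Public benchmarks with built-in harness tasks ("minerva_math" covers MATH).
# MBPP/HumanEval and the custom R-Bench-Math / Math-Bench suites would need
# their own (locally registered) tasks and are left out of this sketch.
TASKS = ["mmlu", "gsm8k", "minerva_math"]

for ckpt in CHECKPOINTS:
    results = lm_eval.simple_evaluate(
        model="hf",                        # HuggingFace transformers backend
        model_args=f"pretrained={ckpt}",
        tasks=TASKS,
        batch_size=8,                      # kept identical across every run
    )
    for task, metrics in results["results"].items():
        print(ckpt, task, metrics)
```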