ZhouChuYue Cursor committed
Commit 4a41aee · Parent: 98dbb6e

Update README: Simplify experimental setup, unify table column order, remove commonsense benchmarks

Files changed (1): README.md (+18 -29)
README.md CHANGED
@@ -135,44 +135,33 @@ We used the **MiniCPM-1.2B** model architecture and **MiniCPM3-4B** tokenizer fo
  - **Mathematical Reasoning:** GSM8K (4-shot), MATH (4-shot), Math-Bench
  - **Code Generation:** HumanEval (0-shot), MBPP (3-shot)
  - **Comprehensive Knowledge:** MMLU (5-shot), MMLU-STEM (5-shot)
- - **Commonsense Reasoning:** ARC-E/C (0-shot), BBH (3-shot), CommonSenseQA (8-shot), HellaSwag (0-shot), OpenBookQA (0-shot), PIQA (0-shot), SIQA (0-shot), Winogrande (0-shot)

  ### 🔧 Experimental Setup

- Following the **Decay Verification** method described in the paper, we evaluated the data quality by continuing the pre-training of a **MiniCPM-1.2B** base model (trained on 1.3T tokens).
-
- | Hyperparameter | Value | Description |
- | :--- | :--- | :--- |
- | **Base Model** | MiniCPM-1.2B | Pre-trained on 1.3T MiniCPM-4 corpus |
- | **Training Tokens** | ~100B | 20,000 steps |
- | **Data Mixture** | 30% Target / 70% Base | 30% UltraData-Math + 70% General Data |
- | **Sequence Length** | 4096 | |
- | **Global Batch Size** | 1280 | Micro batch size 10 |
- | **Learning Rate** | 7.5e-4 $\to$ 3.75e-5 | Exponential decay |
- | **Optimizer** | AdamW | with Maximal Update Parameterization (µP) |

  ### Effectiveness of L0 Parsing Strategy

  To fairly compare different parsing strategies, we conducted experiments on a data subset sampled from the **2023-2024** distribution. We re-parsed the raw HTML from this source using different parsers and **applied the same L1 cleaning operators to all baselines**. This comparison demonstrates the **overall benefit of our L0 Parser + L1 Filtering pipeline** against other parsers under identical cleaning conditions.

- | Parser | Average | MMLU | GSM8K | HumanEval | math | mbpp_full | mmlu-stem |
  |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
- | **UltraData-Math-L0-Parser (Ours)** | **43.44** | 51.41 | 54.97 | **31.71** | **28.72** | 47.10 | 46.76 |
- | trafilatura + w3m | 42.33 | 50.95 | 54.51 | 27.44 | 27.64 | **47.93** | 45.52 |
- | trafilatura | 42.44 | 51.42 | **56.03** | 26.83 | 28.08 | 45.64 | 46.62 |
- | Megamath | 42.32 | **51.46** | 54.06 | 29.88 | 26.04 | 45.64 | **46.81** |
- | magic-html + w3m | 41.29 | 51.23 | 51.63 | 26.83 | 26.58 | 45.02 | 46.45 |

  ### Pipeline Effectiveness (L1 vs L2 vs L3)

  To validate the effectiveness of our L0-L3 hierarchical framework, we conducted ablation studies comparing models trained on different tiers of UltraData-Math. Unlike the L0 parser comparison above (which used a 2023-2024 subset), these results are based on the **full dataset**.

- | Dataset | Average | MMLU | ARC-E | ARC-C | BBH | CSQA | Hella. | OBQA | PIQA | SIQA | Wino. | Math | GSM8K | MBPP | HumanEval | CMMLU | C-Eval |
- | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
- | **UltraData-Math-L1** | 48.39 | 51.41 | 54.50 | 37.29 | 37.75 | 60.44 | 58.02 | 41.60 | 74.21 | 41.71 | 57.14 | 27.78 | 54.66 | 44.71 | 29.88 | 51.28 | 51.89 |
- | **UltraData-Math-L2** | 48.59 | 50.93 | 55.20 | 36.95 | 39.27 | 60.20 | 57.52 | 39.80 | 74.48 | 44.73 | 57.77 | 29.20 | 52.92 | 44.50 | 32.32 | 51.13 | 50.55 |
- | **UltraData-Math-L3** | **52.23** | **51.67** | **59.79** | **38.98** | **43.62** | **61.18** | **58.27** | **57.00** | **74.76** | 43.35 | **59.04** | **37.02** | **61.79** | **49.27** | **32.93** | **52.87** | **54.08** |

  *Note: Results demonstrate that higher-tier data (L3) significantly boosts mathematical reasoning (MATH, GSM8K) and general capabilities.*

@@ -180,13 +169,13 @@ To validate the effectiveness of our L0-L3 hierarchical framework, we conducted

  We used a single dataset for independent training to directly compare the effects of different data sources:

- | Model | Average | MMLU | GSM8K | HumanEval | math | mbpp_full | mmlu-stem | R-bench | Math-bench |
  |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
- | **UltraData-Math (Ours)** | **43.79** | 51.67 | **61.79** | 32.93 | **37.02** | **49.27** | 45.93 | 23.38 | **48.33** |
- | Nemotron-cc 4plus mind | 43.45 | 52.09 | 59.97 | 34.76 | 35.96 | 48.03 | 45.99 | **23.51** | 47.25 |
- | Nemotron-cc 4plus | 42.62 | 51.96 | 58.45 | **35.37** | 33.40 | 46.47 | 45.67 | 22.74 | 46.92 |
- | MegaMath-Web-Pro | 41.38 | **53.16** | 56.71 | 31.71 | 32.12 | 47.10 | **47.15** | 21.23 | 41.83 |
- | FineMath-4+ | 40.51 | 50.90 | 56.25 | 29.88 | 29.84 | 48.96 | 44.98 | 18.93 | 44.33 |

  ## ❤️ Acknowledgements
 
 
  - **Mathematical Reasoning:** GSM8K (4-shot), MATH (4-shot), Math-Bench
  - **Code Generation:** HumanEval (0-shot), MBPP (3-shot)
  - **Comprehensive Knowledge:** MMLU (5-shot), MMLU-STEM (5-shot)

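For anyone reproducing these settings, the sketch below runs one of the listed benchmarks at its stated shot count. The tooling and model path are assumptions for illustration: the README does not name an evaluation harness.

```python
# Minimal sketch using EleutherAI's lm-evaluation-harness (pip install lm-eval).
# The harness choice and checkpoint path are illustrative assumptions,
# not the authors' documented setup.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",                                    # Hugging Face model backend
    model_args="pretrained=path/to/minicpm-1.2b",  # placeholder checkpoint path
    tasks=["gsm8k"],                               # GSM8K, 4-shot per the list above
    num_fewshot=4,
)
print(results["results"]["gsm8k"])
```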
 
  ### 🔧 Experimental Setup

+ We evaluated data quality using the **Decay Verification** method: we continued pre-training a **MiniCPM-1.2B** base model (already pre-trained on 1.3T tokens) for **~100B tokens** on a mixture of **30% target data + 70% general data**.
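
This one-line summary compresses the detailed hyperparameter table shown as removed above (20,000 steps, sequence length 4096, global batch size 1280, AdamW with µP, learning rate decayed exponentially from 7.5e-4 to 3.75e-5). A minimal sketch of that schedule and mixture, assuming per-step decay and per-example source sampling:

```python
import random

# Decay-verification settings taken from the removed hyperparameter table.
# The per-step decay form and per-example sampling are illustrative assumptions.
LR_START, LR_END, TOTAL_STEPS = 7.5e-4, 3.75e-5, 20_000

def lr_at(step: int) -> float:
    """Exponential interpolation: LR_START at step 0, LR_END at TOTAL_STEPS."""
    return LR_START * (LR_END / LR_START) ** (step / TOTAL_STEPS)

def sample_source(rng: random.Random) -> str:
    """Draw a training example's source: 30% UltraData-Math, 70% general data."""
    return "UltraData-Math" if rng.random() < 0.30 else "general"

assert abs(lr_at(0) - 7.5e-4) < 1e-12
assert abs(lr_at(TOTAL_STEPS) - 3.75e-5) < 1e-12
```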
  ### Effectiveness of L0 Parsing Strategy

  To fairly compare different parsing strategies, we conducted experiments on a data subset sampled from the **2023-2024** distribution. We re-parsed the raw HTML from this source using different parsers and **applied the same L1 cleaning operators to all baselines**. This comparison demonstrates the **overall benefit of our L0 Parser + L1 Filtering pipeline** against other parsers under identical cleaning conditions.

+ | Parser | Average | MMLU | MMLU-STEM | MATH | GSM8K | MBPP | HumanEval |
  |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | **UltraData-Math-L0-Parser (Ours)** | **43.44** | 51.41 | 46.76 | **28.72** | 54.97 | 47.10 | **31.71** |
+ | trafilatura + w3m | 42.33 | 50.95 | 45.52 | 27.64 | 54.51 | **47.93** | 27.44 |
+ | trafilatura | 42.44 | 51.42 | 46.62 | 28.08 | **56.03** | 45.64 | 26.83 |
+ | Megamath | 42.32 | **51.46** | **46.81** | 26.04 | 54.06 | 45.64 | 29.88 |
+ | magic-html + w3m | 41.29 | 51.23 | 46.45 | 26.58 | 51.63 | 45.02 | 26.83 |
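
For context on the baselines above, this is roughly what the parsing (L0) step looks like for the `trafilatura` baseline: raw HTML in, main-content text out. The URL is a placeholder, and the shared L1 cleaning operators applied afterwards are not shown:

```python
import trafilatura  # pip install trafilatura

# One baseline parser from the table above. trafilatura may return None on
# very short or boilerplate-only pages, and math-heavy markup (e.g. MathML)
# is exactly where dedicated parsers tend to diverge from generic extractors.
downloaded = trafilatura.fetch_url("https://example.com")  # placeholder URL
text = trafilatura.extract(downloaded) if downloaded else None
print(text)
```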
 
  ### Pipeline Effectiveness (L1 vs L2 vs L3)

  To validate the effectiveness of our L0-L3 hierarchical framework, we conducted ablation studies comparing models trained on different tiers of UltraData-Math. Unlike the L0 parser comparison above (which used a 2023-2024 subset), these results are based on the **full dataset**.

+ | Dataset | Average | MMLU | MMLU-STEM | MATH | GSM8K | MBPP | HumanEval |
+ | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+ | **UltraData-Math-L1** | 42.31 | 51.41 | 45.44 | 27.78 | 54.66 | 44.71 | 29.88 |
+ | **UltraData-Math-L2** | 42.57 | 50.93 | 45.52 | 29.20 | 52.92 | 44.50 | 32.32 |
+ | **UltraData-Math-L3** | **46.44** | **51.67** | **45.93** | **37.02** | **61.79** | **49.27** | **32.93** |

  *Note: Results demonstrate that higher-tier data (L3) significantly boosts mathematical reasoning (MATH, GSM8K) and general capabilities.*
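
As a reading aid: each **Average** is the unweighted mean of the six benchmark columns, which is easy to verify from the rows above (the L2 and L3 means land exactly on a rounding boundary, hence the loose tolerance):

```python
# Check that "Average" equals the mean of MMLU, MMLU-STEM, MATH, GSM8K,
# MBPP, and HumanEval, using the tier-ablation rows above.
rows = {
    "L1": ([51.41, 45.44, 27.78, 54.66, 44.71, 29.88], 42.31),
    "L2": ([50.93, 45.52, 29.20, 52.92, 44.50, 32.32], 42.57),
    "L3": ([51.67, 45.93, 37.02, 61.79, 49.27, 32.93], 46.44),
}
for tier, (scores, reported) in rows.items():
    mean = sum(scores) / len(scores)
    assert abs(mean - reported) <= 0.0051, (tier, mean, reported)
    print(f"{tier}: mean = {mean:.4f}, reported = {reported}")
```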
 
 
  We used a single dataset for independent training to directly compare the effects of different data sources:

+ | Model | Average | MMLU | MMLU-STEM | MATH | GSM8K | MBPP | HumanEval | R-bench | Math-Bench |
  |:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | **UltraData-Math (Ours)** | **43.79** | 51.67 | 45.93 | **37.02** | **61.79** | **49.27** | 32.93 | 23.38 | **48.33** |
+ | Nemotron-cc 4plus mind | 43.45 | 52.09 | 45.99 | 35.96 | 59.97 | 48.03 | 34.76 | **23.51** | 47.25 |
+ | Nemotron-cc 4plus | 42.62 | 51.96 | 45.67 | 33.40 | 58.45 | 46.47 | **35.37** | 22.74 | 46.92 |
+ | MegaMath-Web-Pro | 41.38 | **53.16** | **47.15** | 32.12 | 56.71 | 47.10 | 31.71 | 21.23 | 41.83 |
+ | FineMath-4+ | 40.51 | 50.90 | 44.98 | 29.84 | 56.25 | 48.96 | 29.88 | 18.93 | 44.33 |

  ## ❤️ Acknowledgements
181